Feb 28 04:33:30 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 28 04:33:30 crc restorecon[4690]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 28 04:33:30 crc restorecon[4690]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc 
restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 04:33:30 crc 
restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 
04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 04:33:30 crc 
restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 28 04:33:30 crc 
restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 
crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 
04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 04:33:30 crc 
restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc 
restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Feb 28 04:33:30 crc restorecon[4690]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc 
restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 
28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:30 crc restorecon[4690]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 
crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc 
restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:30 crc restorecon[4690]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc 
restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc 
restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc 
restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 04:33:31 crc 
restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 04:33:31 crc restorecon[4690]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 28 04:33:31 crc restorecon[4690]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 28 04:33:31 crc restorecon[4690]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 28 04:33:31 crc kubenswrapper[5014]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 28 04:33:31 crc kubenswrapper[5014]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 28 04:33:31 crc kubenswrapper[5014]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 28 04:33:31 crc kubenswrapper[5014]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 28 04:33:31 crc kubenswrapper[5014]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 28 04:33:31 crc kubenswrapper[5014]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.869453 5014 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.877951 5014 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878001 5014 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878011 5014 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878021 5014 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878030 5014 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878042 5014 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878054 5014 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878063 5014 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878073 5014 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878081 5014 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878090 5014 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878099 5014 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878106 5014 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878114 5014 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878122 5014 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878129 5014 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878138 5014 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878146 5014 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878154 5014 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878165 5014 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878174 5014 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878182 5014 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878190 5014 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878198 5014 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878209 5014 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878218 5014 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878227 5014 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878236 5014 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878244 5014 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878253 5014 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878261 5014 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878269 5014 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878278 5014 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878287 5014 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 28 
04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878294 5014 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878316 5014 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878324 5014 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878332 5014 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878341 5014 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878349 5014 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878357 5014 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878366 5014 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878374 5014 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878384 5014 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878392 5014 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878401 5014 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878408 5014 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878416 5014 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878424 
5014 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878432 5014 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878440 5014 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878449 5014 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878457 5014 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878465 5014 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878473 5014 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878481 5014 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878488 5014 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878497 5014 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878505 5014 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878514 5014 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878521 5014 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878529 5014 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878536 5014 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 28 04:33:31 crc 
kubenswrapper[5014]: W0228 04:33:31.878547 5014 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878556 5014 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878566 5014 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878574 5014 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878584 5014 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878594 5014 feature_gate.go:330] unrecognized feature gate: Example Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878601 5014 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.878612 5014 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.878777 5014 flags.go:64] FLAG: --address="0.0.0.0" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.878795 5014 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.878835 5014 flags.go:64] FLAG: --anonymous-auth="true" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.878849 5014 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.878861 5014 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.878871 5014 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.878883 5014 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 
04:33:31.878894 5014 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.878905 5014 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.878914 5014 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.878924 5014 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.878936 5014 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.878945 5014 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.878955 5014 flags.go:64] FLAG: --cgroup-root="" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.878964 5014 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.878973 5014 flags.go:64] FLAG: --client-ca-file="" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.878983 5014 flags.go:64] FLAG: --cloud-config="" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.878992 5014 flags.go:64] FLAG: --cloud-provider="" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879002 5014 flags.go:64] FLAG: --cluster-dns="[]" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879016 5014 flags.go:64] FLAG: --cluster-domain="" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879025 5014 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879035 5014 flags.go:64] FLAG: --config-dir="" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879043 5014 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879054 5014 flags.go:64] FLAG: --container-log-max-files="5" Feb 28 04:33:31 crc 
kubenswrapper[5014]: I0228 04:33:31.879074 5014 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879083 5014 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879093 5014 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879103 5014 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879113 5014 flags.go:64] FLAG: --contention-profiling="false" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879122 5014 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879131 5014 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879141 5014 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879150 5014 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879161 5014 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879170 5014 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879180 5014 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879188 5014 flags.go:64] FLAG: --enable-load-reader="false" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879197 5014 flags.go:64] FLAG: --enable-server="true" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879206 5014 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879217 5014 flags.go:64] FLAG: --event-burst="100" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879226 5014 flags.go:64] FLAG: 
--event-qps="50" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879235 5014 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879245 5014 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879254 5014 flags.go:64] FLAG: --eviction-hard="" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879265 5014 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879274 5014 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879284 5014 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879295 5014 flags.go:64] FLAG: --eviction-soft="" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879304 5014 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879314 5014 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879323 5014 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879332 5014 flags.go:64] FLAG: --experimental-mounter-path="" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879341 5014 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879350 5014 flags.go:64] FLAG: --fail-swap-on="true" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879358 5014 flags.go:64] FLAG: --feature-gates="" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879369 5014 flags.go:64] FLAG: --file-check-frequency="20s" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879379 5014 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879388 5014 flags.go:64] 
FLAG: --hairpin-mode="promiscuous-bridge" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879397 5014 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879406 5014 flags.go:64] FLAG: --healthz-port="10248" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879416 5014 flags.go:64] FLAG: --help="false" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879425 5014 flags.go:64] FLAG: --hostname-override="" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879434 5014 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879445 5014 flags.go:64] FLAG: --http-check-frequency="20s" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879455 5014 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879464 5014 flags.go:64] FLAG: --image-credential-provider-config="" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879473 5014 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879482 5014 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879491 5014 flags.go:64] FLAG: --image-service-endpoint="" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879499 5014 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879509 5014 flags.go:64] FLAG: --kube-api-burst="100" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879518 5014 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879528 5014 flags.go:64] FLAG: --kube-api-qps="50" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879537 5014 flags.go:64] FLAG: --kube-reserved="" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879546 5014 flags.go:64] 
FLAG: --kube-reserved-cgroup="" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879555 5014 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879564 5014 flags.go:64] FLAG: --kubelet-cgroups="" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879574 5014 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879583 5014 flags.go:64] FLAG: --lock-file="" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879592 5014 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879601 5014 flags.go:64] FLAG: --log-flush-frequency="5s" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879611 5014 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879624 5014 flags.go:64] FLAG: --log-json-split-stream="false" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879634 5014 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879643 5014 flags.go:64] FLAG: --log-text-split-stream="false" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879652 5014 flags.go:64] FLAG: --logging-format="text" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879662 5014 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879671 5014 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879680 5014 flags.go:64] FLAG: --manifest-url="" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879689 5014 flags.go:64] FLAG: --manifest-url-header="" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879700 5014 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879710 5014 
flags.go:64] FLAG: --max-open-files="1000000" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879721 5014 flags.go:64] FLAG: --max-pods="110" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879730 5014 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879739 5014 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879749 5014 flags.go:64] FLAG: --memory-manager-policy="None" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879758 5014 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879768 5014 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879777 5014 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879786 5014 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879837 5014 flags.go:64] FLAG: --node-status-max-images="50" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879846 5014 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879856 5014 flags.go:64] FLAG: --oom-score-adj="-999" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879865 5014 flags.go:64] FLAG: --pod-cidr="" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879874 5014 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879887 5014 flags.go:64] FLAG: --pod-manifest-path="" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879896 5014 flags.go:64] FLAG: --pod-max-pids="-1" Feb 28 
04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879905 5014 flags.go:64] FLAG: --pods-per-core="0" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879914 5014 flags.go:64] FLAG: --port="10250" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879923 5014 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879932 5014 flags.go:64] FLAG: --provider-id="" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879941 5014 flags.go:64] FLAG: --qos-reserved="" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879950 5014 flags.go:64] FLAG: --read-only-port="10255" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879959 5014 flags.go:64] FLAG: --register-node="true" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879968 5014 flags.go:64] FLAG: --register-schedulable="true" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879977 5014 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.879992 5014 flags.go:64] FLAG: --registry-burst="10" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.880001 5014 flags.go:64] FLAG: --registry-qps="5" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.880010 5014 flags.go:64] FLAG: --reserved-cpus="" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.880019 5014 flags.go:64] FLAG: --reserved-memory="" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.880030 5014 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.880039 5014 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.880049 5014 flags.go:64] FLAG: --rotate-certificates="false" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.880058 5014 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.880068 
5014 flags.go:64] FLAG: --runonce="false" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.880077 5014 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.880114 5014 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.880127 5014 flags.go:64] FLAG: --seccomp-default="false" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.880136 5014 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.880145 5014 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.880165 5014 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.880174 5014 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.880184 5014 flags.go:64] FLAG: --storage-driver-password="root" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.880193 5014 flags.go:64] FLAG: --storage-driver-secure="false" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.880202 5014 flags.go:64] FLAG: --storage-driver-table="stats" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.880211 5014 flags.go:64] FLAG: --storage-driver-user="root" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.880220 5014 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.880230 5014 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.880240 5014 flags.go:64] FLAG: --system-cgroups="" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.880249 5014 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.880264 5014 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 28 
04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.880273 5014 flags.go:64] FLAG: --tls-cert-file="" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.880282 5014 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.880295 5014 flags.go:64] FLAG: --tls-min-version="" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.880304 5014 flags.go:64] FLAG: --tls-private-key-file="" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.880312 5014 flags.go:64] FLAG: --topology-manager-policy="none" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.880321 5014 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.880330 5014 flags.go:64] FLAG: --topology-manager-scope="container" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.880340 5014 flags.go:64] FLAG: --v="2" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.880352 5014 flags.go:64] FLAG: --version="false" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.880364 5014 flags.go:64] FLAG: --vmodule="" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.880374 5014 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.880383 5014 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880592 5014 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880613 5014 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880634 5014 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880645 5014 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880655 5014 
feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880666 5014 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880676 5014 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880686 5014 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880697 5014 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880706 5014 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880720 5014 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880730 5014 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880739 5014 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880748 5014 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880757 5014 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880765 5014 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880775 5014 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880785 5014 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880793 5014 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880834 5014 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880843 5014 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880852 5014 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880860 5014 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880868 5014 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880876 5014 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880883 5014 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880891 5014 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880899 5014 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880907 5014 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880914 5014 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880922 5014 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880929 5014 feature_gate.go:330] unrecognized 
feature gate: HardwareSpeed Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880937 5014 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880945 5014 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880952 5014 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880960 5014 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880967 5014 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880975 5014 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880985 5014 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.880993 5014 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.881001 5014 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.881009 5014 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.881017 5014 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.881025 5014 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.881033 5014 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.881053 5014 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 28 04:33:31 
crc kubenswrapper[5014]: W0228 04:33:31.881061 5014 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.881068 5014 feature_gate.go:330] unrecognized feature gate: Example Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.881076 5014 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.881084 5014 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.881092 5014 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.881102 5014 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.881113 5014 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.881123 5014 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.881132 5014 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.881141 5014 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.881150 5014 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.881158 5014 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.881166 5014 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.881174 5014 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.881182 5014 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.881190 5014 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.881198 5014 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.881206 5014 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.881214 5014 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.881222 5014 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.881229 5014 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.881237 5014 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.881245 5014 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.881252 5014 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.881260 5014 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.882321 5014 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.897023 5014 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.897076 5014 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897203 5014 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897217 5014 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897226 5014 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897235 5014 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897242 5014 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897250 5014 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897258 5014 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897266 5014 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897274 5014 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897283 5014 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897292 5014 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897300 5014 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897307 5014 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897315 5014 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897325 5014 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897335 5014 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897343 5014 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897351 5014 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897359 5014 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897367 5014 feature_gate.go:330] unrecognized feature gate: Example
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897375 5014 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897385 5014 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897395 5014 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897404 5014 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897413 5014 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897423 5014 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897433 5014 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897443 5014 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897452 5014 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897460 5014 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897469 5014 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897477 5014 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897486 5014 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897494 5014 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897503 5014 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897511 5014 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897519 5014 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897527 5014 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897534 5014 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897542 5014 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897549 5014 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897557 5014 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897566 5014 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897574 5014 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897583 5014 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897593 5014 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897602 5014 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897610 5014 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897618 5014 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897626 5014 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897634 5014 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897641 5014 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897649 5014 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897657 5014 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897664 5014 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897672 5014 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897679 5014 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897687 5014 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897696 5014 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897703 5014 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897711 5014 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897720 5014 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897727 5014 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897735 5014 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897744 5014 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897751 5014 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897759 5014 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897766 5014 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897773 5014 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897782 5014 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.897791 5014 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.897828 5014 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898057 5014 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898071 5014 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898081 5014 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898091 5014 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898099 5014 feature_gate.go:330] unrecognized feature gate: NewOLM
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898107 5014 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898117 5014 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898129 5014 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898138 5014 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898146 5014 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898154 5014 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898162 5014 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898171 5014 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898178 5014 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898186 5014 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898194 5014 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898202 5014 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898210 5014 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898217 5014 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898225 5014 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898233 5014 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898240 5014 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898248 5014 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898256 5014 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898263 5014 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898271 5014 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898278 5014 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898286 5014 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898296 5014 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898306 5014 feature_gate.go:330] unrecognized feature gate: PinnedImages
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898314 5014 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898323 5014 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898331 5014 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898339 5014 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898348 5014 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898358 5014 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898368 5014 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898377 5014 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898385 5014 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898394 5014 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898401 5014 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898409 5014 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898416 5014 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898424 5014 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898432 5014 feature_gate.go:330] unrecognized feature gate: SignatureStores
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898439 5014 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898449 5014 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898458 5014 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898467 5014 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898474 5014 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898482 5014 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898489 5014 feature_gate.go:330] unrecognized feature gate: Example
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898497 5014 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898504 5014 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898512 5014 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898520 5014 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898527 5014 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898535 5014 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898542 5014 feature_gate.go:330] unrecognized feature gate: OVNObservability
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898550 5014 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898558 5014 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898565 5014 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898573 5014 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898580 5014 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898588 5014 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898596 5014 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898603 5014 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898611 5014 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898619 5014 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898627 5014 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Feb 28 04:33:31 crc kubenswrapper[5014]: W0228 04:33:31.898637 5014 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.898649 5014 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.899024 5014 server.go:940] "Client rotation is on, will bootstrap in background"
Feb 28 04:33:31 crc kubenswrapper[5014]: E0228 04:33:31.903881 5014 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError"
Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.908775 5014 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.909021 5014 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.911078 5014 server.go:997] "Starting client certificate rotation"
Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.911135 5014 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.911406 5014 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.961565 5014 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.965002 5014 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 28 04:33:31 crc kubenswrapper[5014]: E0228 04:33:31.966417 5014 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError"
Feb 28 04:33:31 crc kubenswrapper[5014]: I0228 04:33:31.985393 5014 log.go:25] "Validated CRI v1 runtime API"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.041601 5014 log.go:25] "Validated CRI v1 image API"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.043742 5014 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.052680 5014 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-28-04-28-49-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.052722 5014 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}]
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.073075 5014 manager.go:217] Machine: {Timestamp:2026-02-28 04:33:32.069562528 +0000 UTC m=+0.739688478 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:ed4d7eba-154f-4bc0-9847-938dd12ba271 BootID:400c935d-cede-4f46-a04e-2bdcfad90852 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:75:4f:89 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:75:4f:89 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:75:88:bc Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:31:bf:23 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:8c:1f:b6 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:56:69:23 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:66:0d:51:31:ce:de Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:5e:0e:1d:cc:f5:b2 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.073368 5014 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.073558 5014 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.074350 5014 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.074739 5014 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.074848 5014 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.075228 5014 topology_manager.go:138] "Creating topology manager with none policy"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.075250 5014 container_manager_linux.go:303] "Creating device plugin manager"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.075962 5014 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.076016 5014 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.076360 5014 state_mem.go:36] "Initialized new in-memory state store"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.076519 5014 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.080607 5014 kubelet.go:418] "Attempting to sync node with API server"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.080686 5014 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.080728 5014 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.080752 5014 kubelet.go:324] "Adding apiserver pod source"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.080773 5014 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.085913 5014 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.087972 5014 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Feb 28 04:33:32 crc kubenswrapper[5014]: W0228 04:33:32.089221 5014 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused
Feb 28 04:33:32 crc kubenswrapper[5014]: W0228 04:33:32.089231 5014 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused
Feb 28 04:33:32 crc kubenswrapper[5014]: E0228 04:33:32.089407 5014 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError"
Feb 28 04:33:32 crc kubenswrapper[5014]: E0228 04:33:32.089466 5014 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.089568 5014 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 28 04:33:32
crc kubenswrapper[5014]: I0228 04:33:32.091271 5014 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.091314 5014 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.091334 5014 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.091344 5014 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.091361 5014 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.091372 5014 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.091383 5014 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.091403 5014 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.091422 5014 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.091441 5014 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.091456 5014 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.091479 5014 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.091520 5014 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.092354 5014 server.go:1280] "Started kubelet" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 
04:33:32.095202 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.095422 5014 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 28 04:33:32 crc systemd[1]: Started Kubernetes Kubelet. Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.096823 5014 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.097878 5014 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 28 04:33:32 crc kubenswrapper[5014]: E0228 04:33:32.099092 5014 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.150:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.18984ee816cebb8f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:32.092316559 +0000 UTC m=+0.762442499,LastTimestamp:2026-02-28 04:33:32.092316559 +0000 UTC m=+0.762442499,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.102553 5014 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.102615 5014 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 
04:33:32.103206 5014 server.go:460] "Adding debug handlers to kubelet server" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.104215 5014 volume_manager.go:287] "The desired_state_of_world populator starts" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.104401 5014 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 28 04:33:32 crc kubenswrapper[5014]: E0228 04:33:32.104241 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.104353 5014 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 28 04:33:32 crc kubenswrapper[5014]: E0228 04:33:32.104762 5014 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="200ms" Feb 28 04:33:32 crc kubenswrapper[5014]: W0228 04:33:32.105197 5014 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Feb 28 04:33:32 crc kubenswrapper[5014]: E0228 04:33:32.105294 5014 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.108406 5014 factory.go:55] Registering systemd factory Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.108448 5014 factory.go:221] Registration of the systemd container factory successfully Feb 28 04:33:32 crc 
kubenswrapper[5014]: I0228 04:33:32.111246 5014 factory.go:153] Registering CRI-O factory Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.111423 5014 factory.go:221] Registration of the crio container factory successfully Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.111665 5014 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.111875 5014 factory.go:103] Registering Raw factory Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.112030 5014 manager.go:1196] Started watching for new ooms in manager Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.111653 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.112286 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.112344 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.112370 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.112394 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.112416 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.112436 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.112461 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.112486 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.112514 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" 
seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.112535 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.112557 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.112577 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.112603 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.112623 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.112643 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 
04:33:32.112663 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.112683 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.112702 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.112722 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.112773 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.112793 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.112851 5014 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.112887 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.112912 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.112933 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.112960 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.112983 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113005 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113026 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113117 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113140 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113170 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113196 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113217 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113237 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113259 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113278 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113301 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113321 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113346 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" 
volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113370 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113392 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113414 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113434 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113455 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113477 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113498 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113522 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113544 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113573 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113593 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113628 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113651 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113674 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113696 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113757 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113782 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113803 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" 
seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113848 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113869 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113897 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113921 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113940 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113963 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.113983 5014 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114002 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114022 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114040 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114058 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114076 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114098 5014 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114115 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114140 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114162 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114180 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114199 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114218 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114237 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114304 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114325 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114344 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114363 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114382 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" 
volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114400 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114425 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114448 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114468 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114489 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114511 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114532 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114553 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114573 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114598 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114617 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114636 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" 
seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114657 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114676 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114697 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114715 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114737 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114756 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114777 5014 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114797 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114848 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114873 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114903 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114927 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114950 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114972 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.114994 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.115210 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.115231 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.115250 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.115271 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.115291 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.115312 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.115318 5014 manager.go:319] Starting recovery of all containers Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.115332 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.115780 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.115827 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.115849 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.115862 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.115878 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.115892 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.115904 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.115917 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.115931 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.115943 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.115954 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.115964 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.115975 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.115985 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.115997 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116007 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116019 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116032 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116042 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116052 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116062 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116072 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116083 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116093 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116104 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116117 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116130 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116141 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116155 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116169 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116193 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116205 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116220 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" 
seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116233 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116245 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116256 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116288 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116301 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116312 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116324 
5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116337 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116348 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116358 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116368 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116378 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116388 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116399 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116410 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116424 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116434 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116443 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116452 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116462 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116472 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116485 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116494 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116506 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116518 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116528 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.116538 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.118173 5014 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.118198 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.118209 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.118225 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.118238 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.118252 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.118263 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.118273 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.118283 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.118296 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.118306 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.118319 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.118330 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.118340 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.118352 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.118362 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.118372 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.118382 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.118392 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.118403 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.118413 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.118423 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.118434 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.118445 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.118456 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.118466 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.118477 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.118487 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.118497 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.118510 5014 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.118520 5014 reconstruct.go:97] "Volume reconstruction finished"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.118527 5014 reconciler.go:26] "Reconciler: start to sync state"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.141172 5014 manager.go:324] Recovery completed
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.152445 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.155036 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.155085 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.155096 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.156231 5014 cpu_manager.go:225] "Starting CPU manager" policy="none"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.156251 5014 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.156275 5014 state_mem.go:36] "Initialized new in-memory state store"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.166896 5014 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.170348 5014 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.170412 5014 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.170448 5014 kubelet.go:2335] "Starting kubelet main sync loop"
Feb 28 04:33:32 crc kubenswrapper[5014]: E0228 04:33:32.170503 5014 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.170890 5014 policy_none.go:49] "None policy: Start"
Feb 28 04:33:32 crc kubenswrapper[5014]: W0228 04:33:32.171444 5014 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused
Feb 28 04:33:32 crc kubenswrapper[5014]: E0228 04:33:32.171582 5014 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.172473 5014 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.172503 5014 state_mem.go:35] "Initializing new in-memory state store"
Feb 28 04:33:32 crc kubenswrapper[5014]: E0228 04:33:32.205192 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.235300 5014 manager.go:334] "Starting Device Plugin manager"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.235387 5014 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.235406 5014 server.go:79] "Starting device plugin registration server"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.235985 5014 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.236049 5014 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.236295 5014 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.236550 5014 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.236566 5014 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 28 04:33:32 crc kubenswrapper[5014]: E0228 04:33:32.256994 5014 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.271395 5014 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.271599 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.272993 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.273040 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.273066 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.273336 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.273633 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.273701 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.274822 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.274893 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.274912 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.275190 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.275335 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.275382 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.275451 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.275476 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.275491 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.276304 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.276335 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.276343 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.276841 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.276900 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.276912 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.277274 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.277375 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.277419 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.278579 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.278638 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.278653 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.278959 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.279005 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.279064 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.279269 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.279526 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.279791 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.280626 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.280847 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.280988 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.281612 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.281749 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.282119 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.282167 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.282179 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.283188 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.283214 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.283224 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 04:33:32 crc kubenswrapper[5014]: E0228 04:33:32.306189 5014 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="400ms"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.320199 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.320290 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.320335 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.320371 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.320400 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.320429 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.320463 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.320625 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.320782 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.320885 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.320913 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.320939 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.320970 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.320997 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.321092 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.336343 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.338074 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.338134 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.338154 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.338206 5014 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: E0228 04:33:32.338859 5014 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.150:6443: connect: connection refused" node="crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.422311 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.422377 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.422415 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.422489 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.422529 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.422581 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.422594 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.422664 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.422633 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.422758 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.422734 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.422792 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.422840 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.422947 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.422894 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.423006 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.423138 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.423206 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID:
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.423232 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.423250 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.423275 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.423293 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.423311 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.423346 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.423339 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.423400 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.423402 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.423430 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.423461 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.423617 5014 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.539927 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.541755 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.541801 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.541829 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.541887 5014 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 04:33:32 crc kubenswrapper[5014]: E0228 04:33:32.542360 5014 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.150:6443: connect: connection refused" node="crc" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.632232 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.655071 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.669594 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.681212 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.682475 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 28 04:33:32 crc kubenswrapper[5014]: W0228 04:33:32.686585 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-24a59d0d09f6cd5b54646f2e7440998da73834c50111898e549cfb1b30b71f3c WatchSource:0}: Error finding container 24a59d0d09f6cd5b54646f2e7440998da73834c50111898e549cfb1b30b71f3c: Status 404 returned error can't find the container with id 24a59d0d09f6cd5b54646f2e7440998da73834c50111898e549cfb1b30b71f3c Feb 28 04:33:32 crc kubenswrapper[5014]: W0228 04:33:32.688763 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-dedb9b338c2cc23d343ed629ffedc2228e1e2cb57b7f8d5bd6f88ec56c9e374b WatchSource:0}: Error finding container dedb9b338c2cc23d343ed629ffedc2228e1e2cb57b7f8d5bd6f88ec56c9e374b: Status 404 returned error can't find the container with id dedb9b338c2cc23d343ed629ffedc2228e1e2cb57b7f8d5bd6f88ec56c9e374b Feb 28 04:33:32 crc kubenswrapper[5014]: W0228 04:33:32.705364 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-8fb7743840e87e35e43a2878bf002c04d06dcff7fc0c953477249a346e7392e4 WatchSource:0}: Error finding container 8fb7743840e87e35e43a2878bf002c04d06dcff7fc0c953477249a346e7392e4: Status 404 returned 
error can't find the container with id 8fb7743840e87e35e43a2878bf002c04d06dcff7fc0c953477249a346e7392e4 Feb 28 04:33:32 crc kubenswrapper[5014]: E0228 04:33:32.707867 5014 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="800ms" Feb 28 04:33:32 crc kubenswrapper[5014]: W0228 04:33:32.709867 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-4b7f2fcb36f9150a05d148737a31b932175d2db1c76b6f9c6217f8d89883dc0f WatchSource:0}: Error finding container 4b7f2fcb36f9150a05d148737a31b932175d2db1c76b6f9c6217f8d89883dc0f: Status 404 returned error can't find the container with id 4b7f2fcb36f9150a05d148737a31b932175d2db1c76b6f9c6217f8d89883dc0f Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.942849 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.944175 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.944208 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.944220 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:32 crc kubenswrapper[5014]: I0228 04:33:32.944252 5014 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 04:33:32 crc kubenswrapper[5014]: E0228 04:33:32.944918 5014 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
38.102.83.150:6443: connect: connection refused" node="crc" Feb 28 04:33:32 crc kubenswrapper[5014]: W0228 04:33:32.964852 5014 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Feb 28 04:33:32 crc kubenswrapper[5014]: E0228 04:33:32.964995 5014 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 28 04:33:33 crc kubenswrapper[5014]: W0228 04:33:33.072373 5014 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Feb 28 04:33:33 crc kubenswrapper[5014]: E0228 04:33:33.072562 5014 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 28 04:33:33 crc kubenswrapper[5014]: I0228 04:33:33.096572 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Feb 28 04:33:33 crc kubenswrapper[5014]: W0228 04:33:33.114575 5014 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Feb 28 04:33:33 crc kubenswrapper[5014]: E0228 04:33:33.114737 5014 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 28 04:33:33 crc kubenswrapper[5014]: I0228 04:33:33.178641 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8ca2197c9a9bbe72d7a8afbaf9e4c3f194b0b346d5ecdeb7018144a837a9ffc1"} Feb 28 04:33:33 crc kubenswrapper[5014]: I0228 04:33:33.179957 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"24a59d0d09f6cd5b54646f2e7440998da73834c50111898e549cfb1b30b71f3c"} Feb 28 04:33:33 crc kubenswrapper[5014]: I0228 04:33:33.181260 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"dedb9b338c2cc23d343ed629ffedc2228e1e2cb57b7f8d5bd6f88ec56c9e374b"} Feb 28 04:33:33 crc kubenswrapper[5014]: I0228 04:33:33.182821 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4b7f2fcb36f9150a05d148737a31b932175d2db1c76b6f9c6217f8d89883dc0f"} Feb 28 04:33:33 crc kubenswrapper[5014]: I0228 04:33:33.184151 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"8fb7743840e87e35e43a2878bf002c04d06dcff7fc0c953477249a346e7392e4"} Feb 28 04:33:33 crc kubenswrapper[5014]: W0228 04:33:33.262467 5014 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Feb 28 04:33:33 crc kubenswrapper[5014]: E0228 04:33:33.263138 5014 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 28 04:33:33 crc kubenswrapper[5014]: E0228 04:33:33.509484 5014 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="1.6s" Feb 28 04:33:33 crc kubenswrapper[5014]: I0228 04:33:33.745961 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:33 crc kubenswrapper[5014]: I0228 04:33:33.749914 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:33 crc kubenswrapper[5014]: I0228 04:33:33.749982 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:33 crc kubenswrapper[5014]: I0228 04:33:33.749999 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:33 crc kubenswrapper[5014]: I0228 
04:33:33.750040 5014 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 04:33:33 crc kubenswrapper[5014]: E0228 04:33:33.750668 5014 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.150:6443: connect: connection refused" node="crc" Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.096835 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.098931 5014 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 28 04:33:34 crc kubenswrapper[5014]: E0228 04:33:34.100038 5014 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.190058 5014 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3" exitCode=0 Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.190184 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3"} Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.190305 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 
04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.191844 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.191880 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.191892 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.193602 5014 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="8affb00e58f77f0887ac9b694df7c6443d27a05af7e2b82ad913287c90e6fd32" exitCode=0 Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.193676 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"8affb00e58f77f0887ac9b694df7c6443d27a05af7e2b82ad913287c90e6fd32"} Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.193883 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.199841 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.199909 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.199922 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.205365 5014 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" 
containerID="e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf" exitCode=0 Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.205552 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf"} Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.205732 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.207585 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.207679 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.207708 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.210500 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2127cf622ac7487c457107170b279eee2bd4abf6ce87378e3b8a17423a25c812"} Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.210540 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"91526622180b27c5558c8064508b1328f9196ac7ffd6f9c5f86bc616d9b6248e"} Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.210553 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"7968cb4f05079288706430331f9a9b96767af7ae0cafa8f46bf17c437a39275c"} Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.210565 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d814b990a8048b944b6ca2b8a1aa5b585368ce3a5d89b7b0993e92a291fa9fa9"} Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.210613 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.211662 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.211700 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.211712 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.212369 5014 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246" exitCode=0 Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.212416 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246"} Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.212427 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.213105 5014 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.213139 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.213152 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.219398 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.221779 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.221917 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.221936 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:34 crc kubenswrapper[5014]: W0228 04:33:34.568973 5014 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Feb 28 04:33:34 crc kubenswrapper[5014]: E0228 04:33:34.569099 5014 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 28 04:33:34 crc kubenswrapper[5014]: I0228 04:33:34.904258 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 04:33:35 crc kubenswrapper[5014]: I0228 04:33:35.095907 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Feb 28 04:33:35 crc kubenswrapper[5014]: E0228 04:33:35.110354 5014 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="3.2s" Feb 28 04:33:35 crc kubenswrapper[5014]: W0228 04:33:35.174066 5014 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Feb 28 04:33:35 crc kubenswrapper[5014]: E0228 04:33:35.174164 5014 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 28 04:33:35 crc kubenswrapper[5014]: I0228 04:33:35.218482 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea"} Feb 28 04:33:35 crc kubenswrapper[5014]: I0228 04:33:35.218554 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0"} Feb 28 04:33:35 crc kubenswrapper[5014]: I0228 04:33:35.218569 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b"} Feb 28 04:33:35 crc kubenswrapper[5014]: I0228 04:33:35.218585 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e"} Feb 28 04:33:35 crc kubenswrapper[5014]: I0228 04:33:35.221410 5014 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005" exitCode=0 Feb 28 04:33:35 crc kubenswrapper[5014]: I0228 04:33:35.221508 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005"} Feb 28 04:33:35 crc kubenswrapper[5014]: I0228 04:33:35.221563 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:35 crc kubenswrapper[5014]: I0228 04:33:35.222991 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:35 crc kubenswrapper[5014]: I0228 04:33:35.223030 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:35 crc kubenswrapper[5014]: I0228 04:33:35.223049 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 28 04:33:35 crc kubenswrapper[5014]: I0228 04:33:35.229644 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:35 crc kubenswrapper[5014]: I0228 04:33:35.230225 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"0543da82ec1087bd70c14cbb530ed3ee36e372c2d8180bff84cb16d5c4d0d0eb"} Feb 28 04:33:35 crc kubenswrapper[5014]: I0228 04:33:35.230276 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"17a4e3ad977a015ef15dad7b23433f35d16245c4d2d38b6008a20070f72a20e7"} Feb 28 04:33:35 crc kubenswrapper[5014]: I0228 04:33:35.230295 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"849ae83a4bc8172d4bc0c361d8f565dcf9d1d71e833d4d875481bbe8b7eca349"} Feb 28 04:33:35 crc kubenswrapper[5014]: I0228 04:33:35.230977 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:35 crc kubenswrapper[5014]: I0228 04:33:35.231018 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:35 crc kubenswrapper[5014]: I0228 04:33:35.231032 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:35 crc kubenswrapper[5014]: I0228 04:33:35.232098 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:35 crc kubenswrapper[5014]: I0228 04:33:35.233847 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Feb 28 04:33:35 crc kubenswrapper[5014]: I0228 04:33:35.238678 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab"} Feb 28 04:33:35 crc kubenswrapper[5014]: I0228 04:33:35.243602 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:35 crc kubenswrapper[5014]: I0228 04:33:35.243649 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:35 crc kubenswrapper[5014]: I0228 04:33:35.243663 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:35 crc kubenswrapper[5014]: I0228 04:33:35.244058 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:35 crc kubenswrapper[5014]: I0228 04:33:35.244183 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:35 crc kubenswrapper[5014]: I0228 04:33:35.245223 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:35 crc kubenswrapper[5014]: I0228 04:33:35.312621 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 04:33:35 crc kubenswrapper[5014]: I0228 04:33:35.350847 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:35 crc kubenswrapper[5014]: I0228 04:33:35.352036 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:35 crc kubenswrapper[5014]: I0228 04:33:35.352066 5014 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:35 crc kubenswrapper[5014]: I0228 04:33:35.352075 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:35 crc kubenswrapper[5014]: I0228 04:33:35.352115 5014 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 04:33:35 crc kubenswrapper[5014]: E0228 04:33:35.352455 5014 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.150:6443: connect: connection refused" node="crc" Feb 28 04:33:35 crc kubenswrapper[5014]: W0228 04:33:35.574754 5014 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Feb 28 04:33:35 crc kubenswrapper[5014]: E0228 04:33:35.574860 5014 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 28 04:33:35 crc kubenswrapper[5014]: W0228 04:33:35.676629 5014 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.150:6443: connect: connection refused Feb 28 04:33:35 crc kubenswrapper[5014]: E0228 04:33:35.676771 5014 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 28 04:33:36 crc kubenswrapper[5014]: I0228 04:33:36.242170 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bd13ff6e21e84ab2f948b2de1088c431a86d6426ac63ce062aa005a76d6c8a44"} Feb 28 04:33:36 crc kubenswrapper[5014]: I0228 04:33:36.242548 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:36 crc kubenswrapper[5014]: I0228 04:33:36.243984 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:36 crc kubenswrapper[5014]: I0228 04:33:36.244066 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:36 crc kubenswrapper[5014]: I0228 04:33:36.244090 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:36 crc kubenswrapper[5014]: I0228 04:33:36.245529 5014 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034" exitCode=0 Feb 28 04:33:36 crc kubenswrapper[5014]: I0228 04:33:36.245683 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:36 crc kubenswrapper[5014]: I0228 04:33:36.245707 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034"} Feb 28 04:33:36 crc kubenswrapper[5014]: I0228 04:33:36.245787 5014 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:36 crc kubenswrapper[5014]: I0228 04:33:36.245869 5014 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 28 04:33:36 crc kubenswrapper[5014]: I0228 04:33:36.245939 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:36 crc kubenswrapper[5014]: I0228 04:33:36.245950 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:36 crc kubenswrapper[5014]: I0228 04:33:36.247339 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:36 crc kubenswrapper[5014]: I0228 04:33:36.247390 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:36 crc kubenswrapper[5014]: I0228 04:33:36.247410 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:36 crc kubenswrapper[5014]: I0228 04:33:36.247555 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:36 crc kubenswrapper[5014]: I0228 04:33:36.247587 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:36 crc kubenswrapper[5014]: I0228 04:33:36.247590 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:36 crc kubenswrapper[5014]: I0228 04:33:36.247638 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:36 crc kubenswrapper[5014]: I0228 04:33:36.247657 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:36 crc kubenswrapper[5014]: I0228 
04:33:36.247599 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:36 crc kubenswrapper[5014]: I0228 04:33:36.247882 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:36 crc kubenswrapper[5014]: I0228 04:33:36.247934 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:36 crc kubenswrapper[5014]: I0228 04:33:36.247953 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:36 crc kubenswrapper[5014]: I0228 04:33:36.259915 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:33:36 crc kubenswrapper[5014]: I0228 04:33:36.662429 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 04:33:36 crc kubenswrapper[5014]: I0228 04:33:36.674826 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 04:33:37 crc kubenswrapper[5014]: I0228 04:33:37.253233 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22"} Feb 28 04:33:37 crc kubenswrapper[5014]: I0228 04:33:37.253298 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d"} Feb 28 04:33:37 crc kubenswrapper[5014]: I0228 04:33:37.253320 5014 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 28 04:33:37 crc 
kubenswrapper[5014]: I0228 04:33:37.253365 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:37 crc kubenswrapper[5014]: I0228 04:33:37.253486 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:37 crc kubenswrapper[5014]: I0228 04:33:37.254370 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:37 crc kubenswrapper[5014]: I0228 04:33:37.254408 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:37 crc kubenswrapper[5014]: I0228 04:33:37.254417 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:37 crc kubenswrapper[5014]: I0228 04:33:37.255495 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:37 crc kubenswrapper[5014]: I0228 04:33:37.255566 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:37 crc kubenswrapper[5014]: I0228 04:33:37.255589 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:37 crc kubenswrapper[5014]: I0228 04:33:37.636018 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:33:38 crc kubenswrapper[5014]: I0228 04:33:38.260430 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f"} Feb 28 04:33:38 crc kubenswrapper[5014]: I0228 04:33:38.260538 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a"} Feb 28 04:33:38 crc kubenswrapper[5014]: I0228 04:33:38.260566 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913"} Feb 28 04:33:38 crc kubenswrapper[5014]: I0228 04:33:38.260480 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:38 crc kubenswrapper[5014]: I0228 04:33:38.260643 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:38 crc kubenswrapper[5014]: I0228 04:33:38.260480 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:38 crc kubenswrapper[5014]: I0228 04:33:38.262511 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:38 crc kubenswrapper[5014]: I0228 04:33:38.262571 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:38 crc kubenswrapper[5014]: I0228 04:33:38.262593 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:38 crc kubenswrapper[5014]: I0228 04:33:38.263798 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:38 crc kubenswrapper[5014]: I0228 04:33:38.263869 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:38 crc kubenswrapper[5014]: I0228 04:33:38.263889 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:38 crc 
kubenswrapper[5014]: I0228 04:33:38.263866 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:38 crc kubenswrapper[5014]: I0228 04:33:38.264009 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:38 crc kubenswrapper[5014]: I0228 04:33:38.264029 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:38 crc kubenswrapper[5014]: I0228 04:33:38.316763 5014 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 28 04:33:38 crc kubenswrapper[5014]: I0228 04:33:38.552761 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:38 crc kubenswrapper[5014]: I0228 04:33:38.554284 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:38 crc kubenswrapper[5014]: I0228 04:33:38.554312 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:38 crc kubenswrapper[5014]: I0228 04:33:38.554322 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:38 crc kubenswrapper[5014]: I0228 04:33:38.554341 5014 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 04:33:39 crc kubenswrapper[5014]: I0228 04:33:39.251255 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 04:33:39 crc kubenswrapper[5014]: I0228 04:33:39.263357 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:39 crc kubenswrapper[5014]: I0228 04:33:39.263484 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Feb 28 04:33:39 crc kubenswrapper[5014]: I0228 04:33:39.263546 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:39 crc kubenswrapper[5014]: I0228 04:33:39.265446 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:39 crc kubenswrapper[5014]: I0228 04:33:39.265506 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:39 crc kubenswrapper[5014]: I0228 04:33:39.265535 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:39 crc kubenswrapper[5014]: I0228 04:33:39.265544 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:39 crc kubenswrapper[5014]: I0228 04:33:39.265562 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:39 crc kubenswrapper[5014]: I0228 04:33:39.265647 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:39 crc kubenswrapper[5014]: I0228 04:33:39.265568 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:39 crc kubenswrapper[5014]: I0228 04:33:39.265722 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:39 crc kubenswrapper[5014]: I0228 04:33:39.265765 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:40 crc kubenswrapper[5014]: I0228 04:33:40.459579 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:33:40 crc kubenswrapper[5014]: I0228 04:33:40.460517 5014 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:40 crc kubenswrapper[5014]: I0228 04:33:40.462353 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:40 crc kubenswrapper[5014]: I0228 04:33:40.462729 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:40 crc kubenswrapper[5014]: I0228 04:33:40.462744 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:41 crc kubenswrapper[5014]: I0228 04:33:41.363857 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 28 04:33:41 crc kubenswrapper[5014]: I0228 04:33:41.364651 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:41 crc kubenswrapper[5014]: I0228 04:33:41.366693 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:41 crc kubenswrapper[5014]: I0228 04:33:41.366922 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:41 crc kubenswrapper[5014]: I0228 04:33:41.367083 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:42 crc kubenswrapper[5014]: I0228 04:33:42.010949 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 28 04:33:42 crc kubenswrapper[5014]: I0228 04:33:42.011201 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:42 crc kubenswrapper[5014]: I0228 04:33:42.012849 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 
04:33:42 crc kubenswrapper[5014]: I0228 04:33:42.012889 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:42 crc kubenswrapper[5014]: I0228 04:33:42.012906 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:42 crc kubenswrapper[5014]: I0228 04:33:42.251257 5014 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 28 04:33:42 crc kubenswrapper[5014]: I0228 04:33:42.251340 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 28 04:33:42 crc kubenswrapper[5014]: E0228 04:33:42.257146 5014 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 28 04:33:43 crc kubenswrapper[5014]: I0228 04:33:43.718398 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Feb 28 04:33:43 crc kubenswrapper[5014]: I0228 04:33:43.718631 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:43 crc kubenswrapper[5014]: I0228 04:33:43.720591 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:43 crc kubenswrapper[5014]: I0228 04:33:43.720643 5014 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:43 crc kubenswrapper[5014]: I0228 04:33:43.720656 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:45 crc kubenswrapper[5014]: I0228 04:33:45.318956 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 04:33:45 crc kubenswrapper[5014]: I0228 04:33:45.319100 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:45 crc kubenswrapper[5014]: I0228 04:33:45.320021 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:45 crc kubenswrapper[5014]: I0228 04:33:45.320062 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:45 crc kubenswrapper[5014]: I0228 04:33:45.320075 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:46 crc kubenswrapper[5014]: I0228 04:33:46.022406 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:33:46Z is after 2026-02-23T05:33:13Z Feb 28 04:33:46 crc kubenswrapper[5014]: W0228 04:33:46.028438 5014 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:33:46Z is after 2026-02-23T05:33:13Z Feb 28 04:33:46 crc 
kubenswrapper[5014]: E0228 04:33:46.028564 5014 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:33:46Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 28 04:33:46 crc kubenswrapper[5014]: W0228 04:33:46.030595 5014 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:33:46Z is after 2026-02-23T05:33:13Z Feb 28 04:33:46 crc kubenswrapper[5014]: E0228 04:33:46.030713 5014 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:33:46Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 28 04:33:46 crc kubenswrapper[5014]: E0228 04:33:46.030892 5014 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:33:46Z is after 2026-02-23T05:33:13Z" interval="6.4s" Feb 28 04:33:46 crc kubenswrapper[5014]: E0228 04:33:46.032290 5014 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:33:46Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.18984ee816cebb8f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:32.092316559 +0000 UTC m=+0.762442499,LastTimestamp:2026-02-28 04:33:32.092316559 +0000 UTC m=+0.762442499,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:46 crc kubenswrapper[5014]: E0228 04:33:46.032653 5014 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:33:46Z is after 2026-02-23T05:33:13Z" node="crc" Feb 28 04:33:46 crc kubenswrapper[5014]: E0228 04:33:46.034143 5014 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:33:46Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 28 04:33:46 crc kubenswrapper[5014]: W0228 04:33:46.036253 5014 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:33:46Z is after 2026-02-23T05:33:13Z Feb 28 04:33:46 crc kubenswrapper[5014]: E0228 04:33:46.036328 5014 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:33:46Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 28 04:33:46 crc kubenswrapper[5014]: I0228 04:33:46.037077 5014 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 28 04:33:46 crc kubenswrapper[5014]: I0228 04:33:46.037141 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 28 04:33:46 crc kubenswrapper[5014]: W0228 04:33:46.037982 5014 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:33:46Z is after 2026-02-23T05:33:13Z Feb 28 04:33:46 crc kubenswrapper[5014]: E0228 04:33:46.038078 5014 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:33:46Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 28 04:33:46 crc kubenswrapper[5014]: I0228 04:33:46.044068 5014 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 28 04:33:46 crc kubenswrapper[5014]: I0228 04:33:46.044147 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Feb 28 04:33:46 crc kubenswrapper[5014]: I0228 04:33:46.099181 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:33:46Z is after 2026-02-23T05:33:13Z Feb 28 04:33:47 crc kubenswrapper[5014]: I0228 04:33:47.113632 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:33:47Z is after 2026-02-23T05:33:13Z Feb 28 04:33:47 crc kubenswrapper[5014]: I0228 04:33:47.288283 5014 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 28 04:33:47 crc kubenswrapper[5014]: I0228 04:33:47.290887 5014 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="bd13ff6e21e84ab2f948b2de1088c431a86d6426ac63ce062aa005a76d6c8a44" exitCode=255 Feb 28 04:33:47 crc kubenswrapper[5014]: I0228 04:33:47.290950 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"bd13ff6e21e84ab2f948b2de1088c431a86d6426ac63ce062aa005a76d6c8a44"} Feb 28 04:33:47 crc kubenswrapper[5014]: I0228 04:33:47.291200 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:47 crc kubenswrapper[5014]: I0228 04:33:47.292664 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:47 crc kubenswrapper[5014]: I0228 04:33:47.292758 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:47 crc kubenswrapper[5014]: I0228 04:33:47.292779 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:47 crc kubenswrapper[5014]: I0228 04:33:47.293913 5014 scope.go:117] "RemoveContainer" containerID="bd13ff6e21e84ab2f948b2de1088c431a86d6426ac63ce062aa005a76d6c8a44" Feb 28 04:33:48 crc kubenswrapper[5014]: I0228 04:33:48.099676 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:33:48Z is after 2026-02-23T05:33:13Z Feb 28 04:33:48 crc 
kubenswrapper[5014]: I0228 04:33:48.297240 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 28 04:33:48 crc kubenswrapper[5014]: I0228 04:33:48.299134 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"869bdc9b7018b1b70b0a48e1611ad6e606cfa98d1c5fd02fafd65a13ec25c84b"} Feb 28 04:33:48 crc kubenswrapper[5014]: I0228 04:33:48.299289 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:48 crc kubenswrapper[5014]: I0228 04:33:48.300028 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:48 crc kubenswrapper[5014]: I0228 04:33:48.300054 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:48 crc kubenswrapper[5014]: I0228 04:33:48.300064 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:49 crc kubenswrapper[5014]: I0228 04:33:49.099188 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:33:49Z is after 2026-02-23T05:33:13Z Feb 28 04:33:49 crc kubenswrapper[5014]: I0228 04:33:49.303972 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 28 04:33:49 crc kubenswrapper[5014]: I0228 04:33:49.304413 5014 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 28 04:33:49 crc kubenswrapper[5014]: I0228 04:33:49.306956 5014 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="869bdc9b7018b1b70b0a48e1611ad6e606cfa98d1c5fd02fafd65a13ec25c84b" exitCode=255 Feb 28 04:33:49 crc kubenswrapper[5014]: I0228 04:33:49.307005 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"869bdc9b7018b1b70b0a48e1611ad6e606cfa98d1c5fd02fafd65a13ec25c84b"} Feb 28 04:33:49 crc kubenswrapper[5014]: I0228 04:33:49.307081 5014 scope.go:117] "RemoveContainer" containerID="bd13ff6e21e84ab2f948b2de1088c431a86d6426ac63ce062aa005a76d6c8a44" Feb 28 04:33:49 crc kubenswrapper[5014]: I0228 04:33:49.307378 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:49 crc kubenswrapper[5014]: I0228 04:33:49.308965 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:49 crc kubenswrapper[5014]: I0228 04:33:49.309006 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:49 crc kubenswrapper[5014]: I0228 04:33:49.309037 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:49 crc kubenswrapper[5014]: I0228 04:33:49.309700 5014 scope.go:117] "RemoveContainer" containerID="869bdc9b7018b1b70b0a48e1611ad6e606cfa98d1c5fd02fafd65a13ec25c84b" Feb 28 04:33:49 crc kubenswrapper[5014]: E0228 04:33:49.309953 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 04:33:50 crc kubenswrapper[5014]: I0228 04:33:50.100453 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:33:50Z is after 2026-02-23T05:33:13Z Feb 28 04:33:50 crc kubenswrapper[5014]: I0228 04:33:50.311625 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 28 04:33:50 crc kubenswrapper[5014]: I0228 04:33:50.377874 5014 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:33:50 crc kubenswrapper[5014]: I0228 04:33:50.378107 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:50 crc kubenswrapper[5014]: I0228 04:33:50.379420 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:50 crc kubenswrapper[5014]: I0228 04:33:50.379495 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:50 crc kubenswrapper[5014]: I0228 04:33:50.379516 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:50 crc kubenswrapper[5014]: I0228 04:33:50.380590 5014 scope.go:117] "RemoveContainer" containerID="869bdc9b7018b1b70b0a48e1611ad6e606cfa98d1c5fd02fafd65a13ec25c84b" Feb 28 04:33:50 crc kubenswrapper[5014]: E0228 04:33:50.380993 5014 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 04:33:50 crc kubenswrapper[5014]: I0228 04:33:50.466328 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:33:51 crc kubenswrapper[5014]: I0228 04:33:51.101732 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:33:51Z is after 2026-02-23T05:33:13Z Feb 28 04:33:51 crc kubenswrapper[5014]: I0228 04:33:51.264739 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:33:51 crc kubenswrapper[5014]: I0228 04:33:51.316902 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:51 crc kubenswrapper[5014]: I0228 04:33:51.317695 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:51 crc kubenswrapper[5014]: I0228 04:33:51.317747 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:51 crc kubenswrapper[5014]: I0228 04:33:51.317760 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:51 crc kubenswrapper[5014]: I0228 04:33:51.318427 5014 scope.go:117] "RemoveContainer" 
containerID="869bdc9b7018b1b70b0a48e1611ad6e606cfa98d1c5fd02fafd65a13ec25c84b" Feb 28 04:33:51 crc kubenswrapper[5014]: E0228 04:33:51.318646 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 04:33:52 crc kubenswrapper[5014]: I0228 04:33:52.098243 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:33:52Z is after 2026-02-23T05:33:13Z Feb 28 04:33:52 crc kubenswrapper[5014]: I0228 04:33:52.252643 5014 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 28 04:33:52 crc kubenswrapper[5014]: I0228 04:33:52.252710 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 28 04:33:52 crc kubenswrapper[5014]: E0228 04:33:52.257978 5014 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not 
found" Feb 28 04:33:52 crc kubenswrapper[5014]: I0228 04:33:52.319563 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:52 crc kubenswrapper[5014]: I0228 04:33:52.320889 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:52 crc kubenswrapper[5014]: I0228 04:33:52.320934 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:52 crc kubenswrapper[5014]: I0228 04:33:52.320949 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:52 crc kubenswrapper[5014]: I0228 04:33:52.321862 5014 scope.go:117] "RemoveContainer" containerID="869bdc9b7018b1b70b0a48e1611ad6e606cfa98d1c5fd02fafd65a13ec25c84b" Feb 28 04:33:52 crc kubenswrapper[5014]: E0228 04:33:52.322090 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 04:33:52 crc kubenswrapper[5014]: I0228 04:33:52.432921 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:52 crc kubenswrapper[5014]: I0228 04:33:52.434306 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:52 crc kubenswrapper[5014]: I0228 04:33:52.434392 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:52 crc kubenswrapper[5014]: I0228 04:33:52.434410 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 28 04:33:52 crc kubenswrapper[5014]: I0228 04:33:52.434445 5014 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 04:33:52 crc kubenswrapper[5014]: E0228 04:33:52.436927 5014 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:33:52Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 28 04:33:52 crc kubenswrapper[5014]: E0228 04:33:52.438075 5014 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:33:52Z is after 2026-02-23T05:33:13Z" node="crc" Feb 28 04:33:52 crc kubenswrapper[5014]: W0228 04:33:52.889430 5014 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:33:52Z is after 2026-02-23T05:33:13Z Feb 28 04:33:52 crc kubenswrapper[5014]: E0228 04:33:52.889523 5014 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:33:52Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 28 04:33:53 crc kubenswrapper[5014]: I0228 04:33:53.099495 5014 csi_plugin.go:884] Failed to contact API server when waiting for 
CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:33:53Z is after 2026-02-23T05:33:13Z Feb 28 04:33:53 crc kubenswrapper[5014]: I0228 04:33:53.755743 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 28 04:33:53 crc kubenswrapper[5014]: I0228 04:33:53.756119 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:53 crc kubenswrapper[5014]: I0228 04:33:53.762332 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:53 crc kubenswrapper[5014]: I0228 04:33:53.762399 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:53 crc kubenswrapper[5014]: I0228 04:33:53.762415 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:53 crc kubenswrapper[5014]: I0228 04:33:53.784135 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 28 04:33:54 crc kubenswrapper[5014]: I0228 04:33:54.101087 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:33:54 crc kubenswrapper[5014]: I0228 04:33:54.324081 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:54 crc kubenswrapper[5014]: I0228 04:33:54.325006 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:54 crc kubenswrapper[5014]: I0228 04:33:54.325074 5014 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:54 crc kubenswrapper[5014]: I0228 04:33:54.325095 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:54 crc kubenswrapper[5014]: W0228 04:33:54.513910 5014 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 28 04:33:54 crc kubenswrapper[5014]: E0228 04:33:54.513961 5014 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 28 04:33:54 crc kubenswrapper[5014]: I0228 04:33:54.609048 5014 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 28 04:33:54 crc kubenswrapper[5014]: I0228 04:33:54.629482 5014 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 28 04:33:54 crc kubenswrapper[5014]: W0228 04:33:54.995536 5014 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 28 04:33:54 crc kubenswrapper[5014]: E0228 04:33:54.995617 5014 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 28 04:33:55 crc 
kubenswrapper[5014]: I0228 04:33:55.099481 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.037390 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984ee816cebb8f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:32.092316559 +0000 UTC m=+0.762442499,LastTimestamp:2026-02-28 04:33:32.092316559 +0000 UTC m=+0.762442499,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.039769 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984ee81a8c3c82 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:32.155067522 +0000 UTC m=+0.825193432,LastTimestamp:2026-02-28 04:33:32.155067522 +0000 UTC m=+0.825193432,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.046761 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984ee81a8c9f75 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:32.155092853 +0000 UTC m=+0.825218753,LastTimestamp:2026-02-28 04:33:32.155092853 +0000 UTC m=+0.825218753,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.052713 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984ee81a8ccb67 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:32.155104103 +0000 UTC m=+0.825230013,LastTimestamp:2026-02-28 04:33:32.155104103 +0000 UTC m=+0.825230013,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.056390 5014 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984ee81f9392d6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:32.239434454 +0000 UTC m=+0.909560364,LastTimestamp:2026-02-28 04:33:32.239434454 +0000 UTC m=+0.909560364,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.060238 5014 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984ee81a8c3c82\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984ee81a8c3c82 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:32.155067522 +0000 UTC m=+0.825193432,LastTimestamp:2026-02-28 04:33:32.273022061 +0000 UTC m=+0.943147991,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.063589 5014 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984ee81a8c9f75\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{crc.18984ee81a8c9f75 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:32.155092853 +0000 UTC m=+0.825218753,LastTimestamp:2026-02-28 04:33:32.273050922 +0000 UTC m=+0.943176842,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.067392 5014 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984ee81a8ccb67\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984ee81a8ccb67 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:32.155104103 +0000 UTC m=+0.825230013,LastTimestamp:2026-02-28 04:33:32.273075543 +0000 UTC m=+0.943201473,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.071078 5014 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984ee81a8c3c82\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984ee81a8c3c82 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:32.155067522 +0000 UTC m=+0.825193432,LastTimestamp:2026-02-28 04:33:32.274853098 +0000 UTC m=+0.944979018,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.074772 5014 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984ee81a8c9f75\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984ee81a8c9f75 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:32.155092853 +0000 UTC m=+0.825218753,LastTimestamp:2026-02-28 04:33:32.27490512 +0000 UTC m=+0.945031030,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.078121 5014 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984ee81a8ccb67\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984ee81a8ccb67 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc 
status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:32.155104103 +0000 UTC m=+0.825230013,LastTimestamp:2026-02-28 04:33:32.27491857 +0000 UTC m=+0.945044480,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.081224 5014 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984ee81a8c3c82\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984ee81a8c3c82 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:32.155067522 +0000 UTC m=+0.825193432,LastTimestamp:2026-02-28 04:33:32.27546929 +0000 UTC m=+0.945595210,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.084255 5014 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984ee81a8c9f75\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984ee81a8c9f75 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:32.155092853 +0000 UTC 
m=+0.825218753,LastTimestamp:2026-02-28 04:33:32.275487331 +0000 UTC m=+0.945613251,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.088264 5014 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984ee81a8ccb67\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984ee81a8ccb67 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:32.155104103 +0000 UTC m=+0.825230013,LastTimestamp:2026-02-28 04:33:32.275497891 +0000 UTC m=+0.945623811,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.093000 5014 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984ee81a8c3c82\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984ee81a8c3c82 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:32.155067522 +0000 UTC m=+0.825193432,LastTimestamp:2026-02-28 04:33:32.276328432 +0000 UTC m=+0.946454342,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.096529 5014 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984ee81a8c9f75\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984ee81a8c9f75 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:32.155092853 +0000 UTC m=+0.825218753,LastTimestamp:2026-02-28 04:33:32.276340313 +0000 UTC m=+0.946466223,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: I0228 04:33:56.096990 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.101259 5014 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984ee81a8ccb67\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984ee81a8ccb67 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:32.155104103 +0000 UTC 
m=+0.825230013,LastTimestamp:2026-02-28 04:33:32.276348703 +0000 UTC m=+0.946474613,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.106232 5014 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984ee81a8c3c82\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984ee81a8c3c82 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:32.155067522 +0000 UTC m=+0.825193432,LastTimestamp:2026-02-28 04:33:32.276861702 +0000 UTC m=+0.946987612,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.111174 5014 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984ee81a8c9f75\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984ee81a8c9f75 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:32.155092853 +0000 UTC m=+0.825218753,LastTimestamp:2026-02-28 04:33:32.276907763 +0000 UTC m=+0.947033673,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.115469 5014 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984ee81a8ccb67\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984ee81a8ccb67 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:32.155104103 +0000 UTC m=+0.825230013,LastTimestamp:2026-02-28 04:33:32.276917264 +0000 UTC m=+0.947043164,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.120082 5014 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984ee81a8c3c82\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984ee81a8c3c82 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:32.155067522 +0000 UTC m=+0.825193432,LastTimestamp:2026-02-28 04:33:32.278623085 +0000 UTC m=+0.948748985,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.124285 5014 
event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984ee81a8c9f75\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984ee81a8c9f75 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:32.155092853 +0000 UTC m=+0.825218753,LastTimestamp:2026-02-28 04:33:32.278646806 +0000 UTC m=+0.948772716,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.129412 5014 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984ee81a8ccb67\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984ee81a8ccb67 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:32.155104103 +0000 UTC m=+0.825230013,LastTimestamp:2026-02-28 04:33:32.278659697 +0000 UTC m=+0.948785597,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.134362 5014 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984ee81a8c3c82\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in 
API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984ee81a8c3c82 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:32.155067522 +0000 UTC m=+0.825193432,LastTimestamp:2026-02-28 04:33:32.2789939 +0000 UTC m=+0.949119820,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.140004 5014 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.18984ee81a8c9f75\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.18984ee81a8c9f75 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:32.155092853 +0000 UTC m=+0.825218753,LastTimestamp:2026-02-28 04:33:32.279015131 +0000 UTC m=+0.949141061,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.142619 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984ee83aaa4ff1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:32.693909489 +0000 UTC m=+1.364035429,LastTimestamp:2026-02-28 04:33:32.693909489 +0000 UTC m=+1.364035429,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.144576 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984ee83aabb7ea openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:32.694001642 +0000 UTC m=+1.364127582,LastTimestamp:2026-02-28 04:33:32.694001642 +0000 UTC m=+1.364127582,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.148195 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: 
User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18984ee83b82408e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:32.708061326 +0000 UTC m=+1.378187276,LastTimestamp:2026-02-28 04:33:32.708061326 +0000 UTC m=+1.378187276,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.149627 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984ee83b85cc3e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:32.708293694 +0000 
UTC m=+1.378420004,LastTimestamp:2026-02-28 04:33:32.708293694 +0000 UTC m=+1.378420004,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.152875 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18984ee83c09dc50 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:32.71694856 +0000 UTC m=+1.387074480,LastTimestamp:2026-02-28 04:33:32.71694856 +0000 UTC m=+1.387074480,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.156445 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984ee85f218be9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:33.305703401 +0000 UTC m=+1.975829311,LastTimestamp:2026-02-28 04:33:33.305703401 +0000 UTC m=+1.975829311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.161828 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984ee85f24647d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:33.305889917 +0000 UTC m=+1.976015827,LastTimestamp:2026-02-28 04:33:33.305889917 +0000 UTC m=+1.976015827,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.165737 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18984ee85f24e8d7 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:33.305923799 +0000 UTC m=+1.976049719,LastTimestamp:2026-02-28 04:33:33.305923799 +0000 UTC m=+1.976049719,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.170361 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984ee85f38fb45 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:33.307239237 +0000 UTC m=+1.977365157,LastTimestamp:2026-02-28 04:33:33.307239237 +0000 UTC m=+1.977365157,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.177399 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18984ee85f6df61c openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:33.310711324 +0000 UTC m=+1.980837254,LastTimestamp:2026-02-28 04:33:33.310711324 +0000 UTC m=+1.980837254,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.181480 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984ee85fd808c5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:33.317662917 +0000 UTC m=+1.987788827,LastTimestamp:2026-02-28 04:33:33.317662917 +0000 UTC m=+1.987788827,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.186036 5014 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984ee85ff83386 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:33.319771014 +0000 UTC m=+1.989896924,LastTimestamp:2026-02-28 04:33:33.319771014 +0000 UTC m=+1.989896924,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.189957 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18984ee86006b91f openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:33.320722719 +0000 UTC m=+1.990848649,LastTimestamp:2026-02-28 04:33:33.320722719 +0000 UTC 
m=+1.990848649,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.193547 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984ee8601f4cf4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:33.322333428 +0000 UTC m=+1.992459348,LastTimestamp:2026-02-28 04:33:33.322333428 +0000 UTC m=+1.992459348,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.198057 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18984ee860720f5a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:33.327757146 +0000 UTC 
m=+1.997883056,LastTimestamp:2026-02-28 04:33:33.327757146 +0000 UTC m=+1.997883056,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.204290 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984ee8607d1a36 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:33.328480822 +0000 UTC m=+1.998606732,LastTimestamp:2026-02-28 04:33:33.328480822 +0000 UTC m=+1.998606732,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.208058 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984ee871646d55 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container 
cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:33.612076373 +0000 UTC m=+2.282202303,LastTimestamp:2026-02-28 04:33:33.612076373 +0000 UTC m=+2.282202303,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.211910 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984ee8722fc65a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:33.62540297 +0000 UTC m=+2.295528910,LastTimestamp:2026-02-28 04:33:33.62540297 +0000 UTC m=+2.295528910,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.218038 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984ee87245dffc openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:33.626851324 +0000 UTC m=+2.296977274,LastTimestamp:2026-02-28 04:33:33.626851324 +0000 UTC m=+2.296977274,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.224247 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984ee88138247e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:33.877609598 +0000 UTC m=+2.547735548,LastTimestamp:2026-02-28 04:33:33.877609598 +0000 UTC m=+2.547735548,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.230566 5014 event.go:359] "Server rejected 
event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984ee88235f5af openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:33.894243759 +0000 UTC m=+2.564369679,LastTimestamp:2026-02-28 04:33:33.894243759 +0000 UTC m=+2.564369679,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.235873 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984ee8824bdb9d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 
04:33:33.895678877 +0000 UTC m=+2.565804787,LastTimestamp:2026-02-28 04:33:33.895678877 +0000 UTC m=+2.565804787,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.241371 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984ee88cc0135b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:34.071067483 +0000 UTC m=+2.741193403,LastTimestamp:2026-02-28 04:33:34.071067483 +0000 UTC m=+2.741193403,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.259245 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984ee88d711f68 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:34.08267044 +0000 UTC m=+2.752796350,LastTimestamp:2026-02-28 04:33:34.08267044 +0000 UTC m=+2.752796350,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.283308 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984ee8940ae6e6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:34.193411814 +0000 UTC m=+2.863537724,LastTimestamp:2026-02-28 04:33:34.193411814 +0000 UTC m=+2.863537724,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.288280 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API 
group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18984ee894b974c5 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:34.204851397 +0000 UTC m=+2.874977347,LastTimestamp:2026-02-28 04:33:34.204851397 +0000 UTC m=+2.874977347,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.292903 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984ee8958842e0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:34.218404576 +0000 UTC m=+2.888530486,LastTimestamp:2026-02-28 04:33:34.218404576 +0000 UTC m=+2.888530486,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.297326 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18984ee895894106 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:34.218469638 +0000 UTC m=+2.888595548,LastTimestamp:2026-02-28 04:33:34.218469638 +0000 UTC m=+2.888595548,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: W0228 04:33:56.300622 5014 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.300664 5014 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" 
logger="UnhandledError" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.300675 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18984ee8a369f944 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:34.451300676 +0000 UTC m=+3.121426586,LastTimestamp:2026-02-28 04:33:34.451300676 +0000 UTC m=+3.121426586,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.304050 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984ee8a36b55e8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:34.451389928 +0000 UTC m=+3.121515848,LastTimestamp:2026-02-28 04:33:34.451389928 +0000 UTC m=+3.121515848,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.307650 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984ee8a36d7c64 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:34.451530852 +0000 UTC m=+3.121656772,LastTimestamp:2026-02-28 04:33:34.451530852 +0000 UTC m=+3.121656772,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.311842 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18984ee8a36e1139 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:34.451568953 +0000 UTC m=+3.121694863,LastTimestamp:2026-02-28 04:33:34.451568953 +0000 UTC 
m=+3.121694863,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.315333 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18984ee8a4af0e39 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:34.472605241 +0000 UTC m=+3.142731151,LastTimestamp:2026-02-28 04:33:34.472605241 +0000 UTC m=+3.142731151,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.319533 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18984ee8a4c75ae4 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:34.474197732 +0000 UTC m=+3.144323642,LastTimestamp:2026-02-28 04:33:34.474197732 +0000 UTC m=+3.144323642,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.323117 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984ee8a53ffbfc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:34.482103292 +0000 UTC m=+3.152229202,LastTimestamp:2026-02-28 04:33:34.482103292 +0000 UTC m=+3.152229202,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.326846 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.18984ee8a5464156 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:34.482514262 +0000 UTC m=+3.152640172,LastTimestamp:2026-02-28 04:33:34.482514262 +0000 UTC m=+3.152640172,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.330033 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984ee8a576a80e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:34.485686286 +0000 UTC m=+3.155812216,LastTimestamp:2026-02-28 04:33:34.485686286 +0000 UTC m=+3.155812216,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.334552 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984ee8a59a0918 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:34.488004888 +0000 UTC m=+3.158130798,LastTimestamp:2026-02-28 04:33:34.488004888 +0000 UTC m=+3.158130798,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.337783 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18984ee8b18093bd openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:34.687663037 +0000 UTC m=+3.357788947,LastTimestamp:2026-02-28 04:33:34.687663037 +0000 UTC m=+3.357788947,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.341289 5014 event.go:359] "Server rejected event 
(will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984ee8b1d5bafd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:34.693243645 +0000 UTC m=+3.363369555,LastTimestamp:2026-02-28 04:33:34.693243645 +0000 UTC m=+3.363369555,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.344864 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18984ee8b23da362 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:34.700053346 +0000 UTC m=+3.370179246,LastTimestamp:2026-02-28 04:33:34.700053346 +0000 UTC m=+3.370179246,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 
04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.346252 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18984ee8b24cb663 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:34.701041251 +0000 UTC m=+3.371167161,LastTimestamp:2026-02-28 04:33:34.701041251 +0000 UTC m=+3.371167161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.348122 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984ee8b2a95040 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:34.707109952 +0000 UTC 
m=+3.377235862,LastTimestamp:2026-02-28 04:33:34.707109952 +0000 UTC m=+3.377235862,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.350406 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984ee8b2e5d264 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:34.711075428 +0000 UTC m=+3.381201338,LastTimestamp:2026-02-28 04:33:34.711075428 +0000 UTC m=+3.381201338,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.353317 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18984ee8be87c239 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:34.906237497 +0000 UTC m=+3.576363407,LastTimestamp:2026-02-28 04:33:34.906237497 +0000 UTC m=+3.576363407,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.356555 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984ee8be89dd7f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:34.906375551 +0000 UTC m=+3.576501461,LastTimestamp:2026-02-28 04:33:34.906375551 +0000 UTC m=+3.576501461,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.359978 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.18984ee8bf4ff757 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:34.919358295 +0000 UTC m=+3.589484205,LastTimestamp:2026-02-28 04:33:34.919358295 +0000 UTC m=+3.589484205,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.363659 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984ee8bfa20f22 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:34.924738338 +0000 UTC m=+3.594864248,LastTimestamp:2026-02-28 04:33:34.924738338 +0000 UTC m=+3.594864248,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.366894 5014 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984ee8bfb4ac7e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:34.92595827 +0000 UTC m=+3.596084180,LastTimestamp:2026-02-28 04:33:34.92595827 +0000 UTC m=+3.596084180,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.370212 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984ee8c9494234 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:35.086690868 +0000 UTC m=+3.756816788,LastTimestamp:2026-02-28 04:33:35.086690868 +0000 UTC m=+3.756816788,Count:1,Type:Normal,EventTime:0001-01-01 
00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.374001 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984ee8c9cc4cea openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:35.095278826 +0000 UTC m=+3.765404746,LastTimestamp:2026-02-28 04:33:35.095278826 +0000 UTC m=+3.765404746,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.380041 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984ee8c9db73cd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:35.096271821 +0000 UTC m=+3.766397741,LastTimestamp:2026-02-28 04:33:35.096271821 +0000 UTC m=+3.766397741,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.383896 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984ee8d1873535 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:35.224968501 +0000 UTC m=+3.895094411,LastTimestamp:2026-02-28 04:33:35.224968501 +0000 UTC m=+3.895094411,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.388166 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984ee8d5cdd28a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:35.296705162 +0000 UTC m=+3.966831072,LastTimestamp:2026-02-28 04:33:35.296705162 +0000 UTC m=+3.966831072,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.392005 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984ee8d6ad0c4f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:35.311334479 +0000 UTC m=+3.981460389,LastTimestamp:2026-02-28 04:33:35.311334479 +0000 UTC m=+3.981460389,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.395772 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984ee8dd254da9 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:35.419878825 +0000 UTC m=+4.090004735,LastTimestamp:2026-02-28 04:33:35.419878825 +0000 UTC m=+4.090004735,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.399224 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984ee8ddf0ee7f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:35.433223807 +0000 UTC m=+4.103349717,LastTimestamp:2026-02-28 04:33:35.433223807 +0000 UTC m=+4.103349717,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.403896 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984ee90e9cc84e openshift-etcd 0 0001-01-01 00:00:00 +0000 
UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:36.24979259 +0000 UTC m=+4.919918540,LastTimestamp:2026-02-28 04:33:36.24979259 +0000 UTC m=+4.919918540,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.407141 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984ee91b3b9f89 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:36.461528969 +0000 UTC m=+5.131654869,LastTimestamp:2026-02-28 04:33:36.461528969 +0000 UTC m=+5.131654869,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.410475 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984ee91bf2f7bd 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:36.473544637 +0000 UTC m=+5.143670547,LastTimestamp:2026-02-28 04:33:36.473544637 +0000 UTC m=+5.143670547,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.414115 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984ee91c083345 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:36.474936133 +0000 UTC m=+5.145062043,LastTimestamp:2026-02-28 04:33:36.474936133 +0000 UTC m=+5.145062043,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.417584 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.18984ee92b8b33da openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:36.735179738 +0000 UTC m=+5.405305658,LastTimestamp:2026-02-28 04:33:36.735179738 +0000 UTC m=+5.405305658,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.422676 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984ee92c53e936 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:36.748333366 +0000 UTC m=+5.418459276,LastTimestamp:2026-02-28 04:33:36.748333366 +0000 UTC m=+5.418459276,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.426532 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984ee92c67ea21 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] 
[] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:36.749644321 +0000 UTC m=+5.419770241,LastTimestamp:2026-02-28 04:33:36.749644321 +0000 UTC m=+5.419770241,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.430732 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984ee9595d7953 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:37.503934803 +0000 UTC m=+6.174060753,LastTimestamp:2026-02-28 04:33:37.503934803 +0000 UTC m=+6.174060753,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.434132 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984ee95d1f1064 
openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:37.566953572 +0000 UTC m=+6.237079512,LastTimestamp:2026-02-28 04:33:37.566953572 +0000 UTC m=+6.237079512,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.437403 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984ee95d40aa10 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:37.5691556 +0000 UTC m=+6.239281540,LastTimestamp:2026-02-28 04:33:37.5691556 +0000 UTC m=+6.239281540,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.440860 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.18984ee96eab3efb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:37.861353211 +0000 UTC m=+6.531479161,LastTimestamp:2026-02-28 04:33:37.861353211 +0000 UTC m=+6.531479161,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.443857 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984ee96fe83982 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:37.882126722 +0000 UTC m=+6.552252682,LastTimestamp:2026-02-28 04:33:37.882126722 +0000 UTC m=+6.552252682,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.447136 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984ee96ffb5a8c openshift-etcd 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:37.883380364 +0000 UTC m=+6.553506304,LastTimestamp:2026-02-28 04:33:37.883380364 +0000 UTC m=+6.553506304,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.448617 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.18984ee97f009912 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:38.13538229 +0000 UTC m=+6.805508200,LastTimestamp:2026-02-28 04:33:38.13538229 +0000 UTC m=+6.805508200,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.452241 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.18984ee97f9f565c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:38.145785436 +0000 UTC m=+6.815911346,LastTimestamp:2026-02-28 04:33:38.145785436 +0000 UTC m=+6.815911346,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.457405 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 28 04:33:56 crc kubenswrapper[5014]: &Event{ObjectMeta:{kube-controller-manager-crc.18984eea7454c7ea openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 28 04:33:56 crc kubenswrapper[5014]: body: Feb 28 04:33:56 crc kubenswrapper[5014]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:42.251317226 +0000 UTC m=+10.921443136,LastTimestamp:2026-02-28 04:33:42.251317226 +0000 UTC m=+10.921443136,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 28 04:33:56 crc kubenswrapper[5014]: > Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.462697 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984eea7455cc27 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:42.251383847 +0000 UTC m=+10.921509757,LastTimestamp:2026-02-28 04:33:42.251383847 +0000 UTC m=+10.921509757,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.466398 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 28 04:33:56 crc kubenswrapper[5014]: &Event{ObjectMeta:{kube-apiserver-crc.18984eeb55fb94d1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 28 04:33:56 crc kubenswrapper[5014]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 28 04:33:56 crc kubenswrapper[5014]: Feb 28 04:33:56 crc kubenswrapper[5014]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:46.037122257 +0000 UTC m=+14.707248167,LastTimestamp:2026-02-28 04:33:46.037122257 +0000 UTC m=+14.707248167,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 28 04:33:56 crc kubenswrapper[5014]: > Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.469426 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984eeb55fc4e22 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:46.037169698 +0000 UTC m=+14.707295618,LastTimestamp:2026-02-28 04:33:46.037169698 +0000 UTC m=+14.707295618,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.473460 5014 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18984eeb55fb94d1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 28 04:33:56 crc kubenswrapper[5014]: &Event{ObjectMeta:{kube-apiserver-crc.18984eeb55fb94d1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Feb 28 04:33:56 crc kubenswrapper[5014]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Feb 28 04:33:56 crc kubenswrapper[5014]: Feb 28 04:33:56 crc kubenswrapper[5014]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:46.037122257 +0000 UTC m=+14.707248167,LastTimestamp:2026-02-28 04:33:46.044123172 +0000 UTC m=+14.714249082,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 28 04:33:56 crc kubenswrapper[5014]: > Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.476845 5014 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18984eeb55fc4e22\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984eeb55fc4e22 openshift-kube-apiserver 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:46.037169698 +0000 UTC m=+14.707295618,LastTimestamp:2026-02-28 04:33:46.044180023 +0000 UTC m=+14.714305933,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.481158 5014 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18984ee8c9db73cd\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984ee8c9db73cd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:35.096271821 +0000 UTC m=+3.766397741,LastTimestamp:2026-02-28 04:33:47.295361629 +0000 UTC m=+15.965487579,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.486338 5014 event.go:359] "Server rejected event (will 
not retry!)" err="events \"kube-apiserver-crc.18984ee8d5cdd28a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984ee8d5cdd28a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:35.296705162 +0000 UTC m=+3.966831072,LastTimestamp:2026-02-28 04:33:47.483294087 +0000 UTC m=+16.153419997,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.490363 5014 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.18984ee8d6ad0c4f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.18984ee8d6ad0c4f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:35.311334479 +0000 UTC m=+3.981460389,LastTimestamp:2026-02-28 04:33:47.490306572 +0000 UTC m=+16.160432472,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.494174 5014 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18984eea7454c7ea\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 28 04:33:56 crc kubenswrapper[5014]: &Event{ObjectMeta:{kube-controller-manager-crc.18984eea7454c7ea openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 28 04:33:56 crc kubenswrapper[5014]: body: Feb 28 04:33:56 crc kubenswrapper[5014]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:42.251317226 +0000 UTC m=+10.921443136,LastTimestamp:2026-02-28 04:33:52.252694263 +0000 UTC m=+20.922820183,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 28 04:33:56 crc kubenswrapper[5014]: > Feb 28 04:33:56 crc kubenswrapper[5014]: E0228 04:33:56.497488 5014 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18984eea7455cc27\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984eea7455cc27 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:42.251383847 +0000 UTC m=+10.921509757,LastTimestamp:2026-02-28 04:33:52.252735114 +0000 UTC m=+20.922861034,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:33:57 crc kubenswrapper[5014]: I0228 04:33:57.100541 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:33:57 crc kubenswrapper[5014]: I0228 04:33:57.636759 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:33:57 crc kubenswrapper[5014]: I0228 04:33:57.636969 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:57 crc kubenswrapper[5014]: I0228 04:33:57.638088 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:57 crc kubenswrapper[5014]: I0228 04:33:57.638153 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:57 crc kubenswrapper[5014]: I0228 04:33:57.638172 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:57 crc kubenswrapper[5014]: 
I0228 04:33:57.639104 5014 scope.go:117] "RemoveContainer" containerID="869bdc9b7018b1b70b0a48e1611ad6e606cfa98d1c5fd02fafd65a13ec25c84b" Feb 28 04:33:57 crc kubenswrapper[5014]: E0228 04:33:57.639392 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 04:33:58 crc kubenswrapper[5014]: I0228 04:33:58.100576 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:33:59 crc kubenswrapper[5014]: I0228 04:33:59.100334 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:33:59 crc kubenswrapper[5014]: I0228 04:33:59.438374 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:33:59 crc kubenswrapper[5014]: I0228 04:33:59.440347 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:33:59 crc kubenswrapper[5014]: I0228 04:33:59.440506 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:33:59 crc kubenswrapper[5014]: I0228 04:33:59.440606 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:33:59 crc kubenswrapper[5014]: I0228 04:33:59.440749 5014 kubelet_node_status.go:76] "Attempting 
to register node" node="crc" Feb 28 04:33:59 crc kubenswrapper[5014]: E0228 04:33:59.444906 5014 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 28 04:33:59 crc kubenswrapper[5014]: E0228 04:33:59.444956 5014 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 28 04:34:00 crc kubenswrapper[5014]: I0228 04:34:00.101870 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:34:01 crc kubenswrapper[5014]: I0228 04:34:01.101311 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:34:02 crc kubenswrapper[5014]: I0228 04:34:02.099707 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:34:02 crc kubenswrapper[5014]: I0228 04:34:02.252117 5014 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 28 04:34:02 crc kubenswrapper[5014]: 
I0228 04:34:02.252202 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 28 04:34:02 crc kubenswrapper[5014]: I0228 04:34:02.252274 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 04:34:02 crc kubenswrapper[5014]: I0228 04:34:02.252437 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:34:02 crc kubenswrapper[5014]: I0228 04:34:02.253375 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:34:02 crc kubenswrapper[5014]: I0228 04:34:02.253400 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:34:02 crc kubenswrapper[5014]: I0228 04:34:02.253414 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:34:02 crc kubenswrapper[5014]: I0228 04:34:02.254113 5014 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"7968cb4f05079288706430331f9a9b96767af7ae0cafa8f46bf17c437a39275c"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 28 04:34:02 crc kubenswrapper[5014]: I0228 04:34:02.254347 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" 
containerName="cluster-policy-controller" containerID="cri-o://7968cb4f05079288706430331f9a9b96767af7ae0cafa8f46bf17c437a39275c" gracePeriod=30 Feb 28 04:34:02 crc kubenswrapper[5014]: E0228 04:34:02.259513 5014 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 28 04:34:02 crc kubenswrapper[5014]: E0228 04:34:02.261471 5014 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18984eea7454c7ea\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 28 04:34:02 crc kubenswrapper[5014]: &Event{ObjectMeta:{kube-controller-manager-crc.18984eea7454c7ea openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 28 04:34:02 crc kubenswrapper[5014]: body: Feb 28 04:34:02 crc kubenswrapper[5014]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:42.251317226 +0000 UTC m=+10.921443136,LastTimestamp:2026-02-28 04:34:02.252179104 +0000 UTC m=+30.922305014,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 28 04:34:02 crc kubenswrapper[5014]: > Feb 28 04:34:02 crc kubenswrapper[5014]: E0228 04:34:02.266799 5014 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18984eea7455cc27\" is forbidden: User \"system:anonymous\" 
cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984eea7455cc27 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:42.251383847 +0000 UTC m=+10.921509757,LastTimestamp:2026-02-28 04:34:02.252236826 +0000 UTC m=+30.922362736,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:34:02 crc kubenswrapper[5014]: E0228 04:34:02.273041 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984eef1c9a86f8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:34:02.254329592 +0000 UTC m=+30.924455512,LastTimestamp:2026-02-28 04:34:02.254329592 +0000 UTC 
m=+30.924455512,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:34:02 crc kubenswrapper[5014]: E0228 04:34:02.387437 5014 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18984ee85ff83386\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984ee85ff83386 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:33.319771014 +0000 UTC m=+1.989896924,LastTimestamp:2026-02-28 04:34:02.378036983 +0000 UTC m=+31.048162893,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:34:02 crc kubenswrapper[5014]: E0228 04:34:02.575581 5014 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18984ee871646d55\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984ee871646d55 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:33.612076373 +0000 UTC m=+2.282202303,LastTimestamp:2026-02-28 04:34:02.567971928 +0000 UTC m=+31.238097828,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:34:02 crc kubenswrapper[5014]: E0228 04:34:02.590076 5014 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.18984ee8722fc65a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984ee8722fc65a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:33:33.62540297 +0000 UTC m=+2.295528910,LastTimestamp:2026-02-28 04:34:02.582186803 +0000 UTC m=+31.252312713,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:34:03 crc kubenswrapper[5014]: I0228 04:34:03.100688 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot 
get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:34:03 crc kubenswrapper[5014]: I0228 04:34:03.351987 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 28 04:34:03 crc kubenswrapper[5014]: I0228 04:34:03.352564 5014 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="7968cb4f05079288706430331f9a9b96767af7ae0cafa8f46bf17c437a39275c" exitCode=255 Feb 28 04:34:03 crc kubenswrapper[5014]: I0228 04:34:03.352618 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"7968cb4f05079288706430331f9a9b96767af7ae0cafa8f46bf17c437a39275c"} Feb 28 04:34:03 crc kubenswrapper[5014]: I0228 04:34:03.352701 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"714067117f5f948a99fcf525805dbc84659d7486a7d90a54aa54a9b924c7cbd7"} Feb 28 04:34:03 crc kubenswrapper[5014]: I0228 04:34:03.352957 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:34:03 crc kubenswrapper[5014]: I0228 04:34:03.354751 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:34:03 crc kubenswrapper[5014]: I0228 04:34:03.354802 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:34:03 crc kubenswrapper[5014]: I0228 04:34:03.354839 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:34:04 crc kubenswrapper[5014]: I0228 04:34:04.103933 5014 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:34:04 crc kubenswrapper[5014]: I0228 04:34:04.905402 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 04:34:04 crc kubenswrapper[5014]: I0228 04:34:04.905680 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:34:04 crc kubenswrapper[5014]: I0228 04:34:04.907573 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:34:04 crc kubenswrapper[5014]: I0228 04:34:04.907634 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:34:04 crc kubenswrapper[5014]: I0228 04:34:04.907645 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:34:05 crc kubenswrapper[5014]: I0228 04:34:05.100754 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:34:06 crc kubenswrapper[5014]: I0228 04:34:06.103521 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:34:06 crc kubenswrapper[5014]: I0228 04:34:06.446135 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:34:06 crc kubenswrapper[5014]: I0228 04:34:06.448520 5014 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:34:06 crc kubenswrapper[5014]: I0228 04:34:06.448656 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:34:06 crc kubenswrapper[5014]: I0228 04:34:06.449084 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:34:06 crc kubenswrapper[5014]: I0228 04:34:06.449202 5014 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 04:34:06 crc kubenswrapper[5014]: E0228 04:34:06.452546 5014 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 28 04:34:06 crc kubenswrapper[5014]: E0228 04:34:06.453181 5014 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 28 04:34:07 crc kubenswrapper[5014]: I0228 04:34:07.103137 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:34:08 crc kubenswrapper[5014]: I0228 04:34:08.102865 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:34:09 crc kubenswrapper[5014]: I0228 04:34:09.097903 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource 
"csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:34:09 crc kubenswrapper[5014]: I0228 04:34:09.252065 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 04:34:09 crc kubenswrapper[5014]: I0228 04:34:09.252489 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:34:09 crc kubenswrapper[5014]: I0228 04:34:09.254627 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:34:09 crc kubenswrapper[5014]: I0228 04:34:09.254739 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:34:09 crc kubenswrapper[5014]: I0228 04:34:09.254765 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:34:09 crc kubenswrapper[5014]: W0228 04:34:09.619520 5014 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 28 04:34:09 crc kubenswrapper[5014]: E0228 04:34:09.619620 5014 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 28 04:34:10 crc kubenswrapper[5014]: I0228 04:34:10.102673 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:34:11 crc kubenswrapper[5014]: I0228 04:34:11.102164 5014 csi_plugin.go:884] Failed to contact API server 
when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:34:12 crc kubenswrapper[5014]: I0228 04:34:12.101269 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:34:12 crc kubenswrapper[5014]: I0228 04:34:12.252044 5014 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 28 04:34:12 crc kubenswrapper[5014]: I0228 04:34:12.252241 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 28 04:34:12 crc kubenswrapper[5014]: E0228 04:34:12.258592 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 28 04:34:12 crc kubenswrapper[5014]: &Event{ObjectMeta:{kube-controller-manager-crc.18984ef170860ab6 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Feb 28 04:34:12 crc kubenswrapper[5014]: body: Feb 28 04:34:12 crc kubenswrapper[5014]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:34:12.252207798 +0000 UTC m=+40.922333748,LastTimestamp:2026-02-28 04:34:12.252207798 +0000 UTC m=+40.922333748,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 28 04:34:12 crc kubenswrapper[5014]: > Feb 28 04:34:12 crc kubenswrapper[5014]: E0228 04:34:12.259616 5014 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 28 04:34:12 crc kubenswrapper[5014]: E0228 04:34:12.266383 5014 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.18984ef170880b25 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 
04:34:12.252338981 +0000 UTC m=+40.922464931,LastTimestamp:2026-02-28 04:34:12.252338981 +0000 UTC m=+40.922464931,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:34:13 crc kubenswrapper[5014]: I0228 04:34:13.103952 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:34:13 crc kubenswrapper[5014]: I0228 04:34:13.171238 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:34:13 crc kubenswrapper[5014]: I0228 04:34:13.174030 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:34:13 crc kubenswrapper[5014]: I0228 04:34:13.174126 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:34:13 crc kubenswrapper[5014]: I0228 04:34:13.174150 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:34:13 crc kubenswrapper[5014]: I0228 04:34:13.175355 5014 scope.go:117] "RemoveContainer" containerID="869bdc9b7018b1b70b0a48e1611ad6e606cfa98d1c5fd02fafd65a13ec25c84b" Feb 28 04:34:13 crc kubenswrapper[5014]: I0228 04:34:13.452879 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:34:13 crc kubenswrapper[5014]: I0228 04:34:13.454533 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:34:13 crc kubenswrapper[5014]: I0228 04:34:13.454588 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:34:13 crc kubenswrapper[5014]: I0228 04:34:13.454612 
5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:34:13 crc kubenswrapper[5014]: I0228 04:34:13.454654 5014 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 04:34:13 crc kubenswrapper[5014]: E0228 04:34:13.461177 5014 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 28 04:34:13 crc kubenswrapper[5014]: E0228 04:34:13.461699 5014 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 28 04:34:14 crc kubenswrapper[5014]: I0228 04:34:14.104459 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:34:14 crc kubenswrapper[5014]: W0228 04:34:14.282362 5014 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 28 04:34:14 crc kubenswrapper[5014]: E0228 04:34:14.282440 5014 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 28 04:34:14 crc kubenswrapper[5014]: I0228 04:34:14.392594 5014 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 28 04:34:14 crc kubenswrapper[5014]: I0228 04:34:14.393762 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 28 04:34:14 crc kubenswrapper[5014]: I0228 04:34:14.397574 5014 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b7c458a4854c14b9241df3405d8ff37c267bad8d68195a53353501612ecc46f0" exitCode=255 Feb 28 04:34:14 crc kubenswrapper[5014]: I0228 04:34:14.397671 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b7c458a4854c14b9241df3405d8ff37c267bad8d68195a53353501612ecc46f0"} Feb 28 04:34:14 crc kubenswrapper[5014]: I0228 04:34:14.397772 5014 scope.go:117] "RemoveContainer" containerID="869bdc9b7018b1b70b0a48e1611ad6e606cfa98d1c5fd02fafd65a13ec25c84b" Feb 28 04:34:14 crc kubenswrapper[5014]: I0228 04:34:14.398032 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:34:14 crc kubenswrapper[5014]: I0228 04:34:14.400301 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:34:14 crc kubenswrapper[5014]: I0228 04:34:14.400369 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:34:14 crc kubenswrapper[5014]: I0228 04:34:14.400388 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:34:14 crc kubenswrapper[5014]: I0228 04:34:14.401558 5014 scope.go:117] "RemoveContainer" containerID="b7c458a4854c14b9241df3405d8ff37c267bad8d68195a53353501612ecc46f0" Feb 28 04:34:14 
crc kubenswrapper[5014]: E0228 04:34:14.402607 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 04:34:15 crc kubenswrapper[5014]: I0228 04:34:15.102870 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:34:15 crc kubenswrapper[5014]: I0228 04:34:15.403011 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 28 04:34:15 crc kubenswrapper[5014]: W0228 04:34:15.450270 5014 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 28 04:34:15 crc kubenswrapper[5014]: E0228 04:34:15.450328 5014 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 28 04:34:16 crc kubenswrapper[5014]: I0228 04:34:16.101968 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:34:17 crc kubenswrapper[5014]: I0228 04:34:17.100902 5014 
csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:34:17 crc kubenswrapper[5014]: I0228 04:34:17.637042 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:34:17 crc kubenswrapper[5014]: I0228 04:34:17.637375 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:34:17 crc kubenswrapper[5014]: I0228 04:34:17.639283 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:34:17 crc kubenswrapper[5014]: I0228 04:34:17.639370 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:34:17 crc kubenswrapper[5014]: I0228 04:34:17.639404 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:34:17 crc kubenswrapper[5014]: I0228 04:34:17.640625 5014 scope.go:117] "RemoveContainer" containerID="b7c458a4854c14b9241df3405d8ff37c267bad8d68195a53353501612ecc46f0" Feb 28 04:34:17 crc kubenswrapper[5014]: E0228 04:34:17.640988 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 04:34:18 crc kubenswrapper[5014]: I0228 04:34:18.103053 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in 
API group "storage.k8s.io" at the cluster scope Feb 28 04:34:19 crc kubenswrapper[5014]: I0228 04:34:19.103789 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:34:19 crc kubenswrapper[5014]: W0228 04:34:19.143613 5014 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 28 04:34:19 crc kubenswrapper[5014]: E0228 04:34:19.143673 5014 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 28 04:34:19 crc kubenswrapper[5014]: I0228 04:34:19.255156 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 04:34:19 crc kubenswrapper[5014]: I0228 04:34:19.255326 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:34:19 crc kubenswrapper[5014]: I0228 04:34:19.256514 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:34:19 crc kubenswrapper[5014]: I0228 04:34:19.256544 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:34:19 crc kubenswrapper[5014]: I0228 04:34:19.256554 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:34:19 crc kubenswrapper[5014]: I0228 
04:34:19.262401 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 04:34:19 crc kubenswrapper[5014]: I0228 04:34:19.419103 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:34:19 crc kubenswrapper[5014]: I0228 04:34:19.420467 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:34:19 crc kubenswrapper[5014]: I0228 04:34:19.420534 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:34:19 crc kubenswrapper[5014]: I0228 04:34:19.420552 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:34:20 crc kubenswrapper[5014]: I0228 04:34:20.101579 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:34:20 crc kubenswrapper[5014]: I0228 04:34:20.377919 5014 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:34:20 crc kubenswrapper[5014]: I0228 04:34:20.378064 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:34:20 crc kubenswrapper[5014]: I0228 04:34:20.378966 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:34:20 crc kubenswrapper[5014]: I0228 04:34:20.378997 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:34:20 crc kubenswrapper[5014]: I0228 04:34:20.379007 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 28 04:34:20 crc kubenswrapper[5014]: I0228 04:34:20.379435 5014 scope.go:117] "RemoveContainer" containerID="b7c458a4854c14b9241df3405d8ff37c267bad8d68195a53353501612ecc46f0" Feb 28 04:34:20 crc kubenswrapper[5014]: E0228 04:34:20.379576 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 04:34:20 crc kubenswrapper[5014]: I0228 04:34:20.462005 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:34:20 crc kubenswrapper[5014]: I0228 04:34:20.462922 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:34:20 crc kubenswrapper[5014]: I0228 04:34:20.462961 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:34:20 crc kubenswrapper[5014]: I0228 04:34:20.463001 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:34:20 crc kubenswrapper[5014]: I0228 04:34:20.463023 5014 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 04:34:20 crc kubenswrapper[5014]: E0228 04:34:20.466503 5014 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 28 04:34:20 crc kubenswrapper[5014]: E0228 04:34:20.466561 5014 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User 
\"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 28 04:34:21 crc kubenswrapper[5014]: I0228 04:34:21.102270 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:34:21 crc kubenswrapper[5014]: I0228 04:34:21.369101 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 28 04:34:21 crc kubenswrapper[5014]: I0228 04:34:21.369332 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:34:21 crc kubenswrapper[5014]: I0228 04:34:21.370774 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:34:21 crc kubenswrapper[5014]: I0228 04:34:21.370837 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:34:21 crc kubenswrapper[5014]: I0228 04:34:21.370850 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:34:22 crc kubenswrapper[5014]: I0228 04:34:22.106540 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:34:22 crc kubenswrapper[5014]: E0228 04:34:22.259740 5014 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 28 04:34:23 crc kubenswrapper[5014]: I0228 04:34:23.101761 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User 
"system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:34:24 crc kubenswrapper[5014]: I0228 04:34:24.100787 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:34:25 crc kubenswrapper[5014]: I0228 04:34:25.103245 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:34:26 crc kubenswrapper[5014]: I0228 04:34:26.101104 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:34:27 crc kubenswrapper[5014]: I0228 04:34:27.100070 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:34:27 crc kubenswrapper[5014]: I0228 04:34:27.466951 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:34:27 crc kubenswrapper[5014]: I0228 04:34:27.468438 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:34:27 crc kubenswrapper[5014]: I0228 04:34:27.468600 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:34:27 crc kubenswrapper[5014]: I0228 04:34:27.468728 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 
28 04:34:27 crc kubenswrapper[5014]: I0228 04:34:27.468885 5014 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 04:34:27 crc kubenswrapper[5014]: E0228 04:34:27.471793 5014 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 28 04:34:27 crc kubenswrapper[5014]: E0228 04:34:27.472296 5014 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 28 04:34:28 crc kubenswrapper[5014]: I0228 04:34:28.103907 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:34:29 crc kubenswrapper[5014]: I0228 04:34:29.099843 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:34:30 crc kubenswrapper[5014]: I0228 04:34:30.100530 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:34:31 crc kubenswrapper[5014]: I0228 04:34:31.102983 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:34:32 crc kubenswrapper[5014]: I0228 
04:34:32.103123 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:34:32 crc kubenswrapper[5014]: I0228 04:34:32.170951 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:34:32 crc kubenswrapper[5014]: I0228 04:34:32.172049 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:34:32 crc kubenswrapper[5014]: I0228 04:34:32.172088 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:34:32 crc kubenswrapper[5014]: I0228 04:34:32.172105 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:34:32 crc kubenswrapper[5014]: I0228 04:34:32.172714 5014 scope.go:117] "RemoveContainer" containerID="b7c458a4854c14b9241df3405d8ff37c267bad8d68195a53353501612ecc46f0" Feb 28 04:34:32 crc kubenswrapper[5014]: E0228 04:34:32.172917 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 04:34:32 crc kubenswrapper[5014]: E0228 04:34:32.260191 5014 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 28 04:34:33 crc kubenswrapper[5014]: I0228 04:34:33.101028 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" 
cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:34:34 crc kubenswrapper[5014]: I0228 04:34:34.099974 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:34:34 crc kubenswrapper[5014]: I0228 04:34:34.471897 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:34:34 crc kubenswrapper[5014]: I0228 04:34:34.473649 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:34:34 crc kubenswrapper[5014]: I0228 04:34:34.473699 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:34:34 crc kubenswrapper[5014]: I0228 04:34:34.473717 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:34:34 crc kubenswrapper[5014]: I0228 04:34:34.473747 5014 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 04:34:34 crc kubenswrapper[5014]: E0228 04:34:34.477703 5014 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 28 04:34:34 crc kubenswrapper[5014]: E0228 04:34:34.477839 5014 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 28 04:34:35 crc kubenswrapper[5014]: I0228 04:34:35.101438 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is 
forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:34:35 crc kubenswrapper[5014]: W0228 04:34:35.518188 5014 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 28 04:34:35 crc kubenswrapper[5014]: E0228 04:34:35.518529 5014 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 28 04:34:36 crc kubenswrapper[5014]: I0228 04:34:36.102322 5014 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 28 04:34:36 crc kubenswrapper[5014]: I0228 04:34:36.649169 5014 csr.go:261] certificate signing request csr-j49hn is approved, waiting to be issued Feb 28 04:34:36 crc kubenswrapper[5014]: I0228 04:34:36.683864 5014 csr.go:257] certificate signing request csr-j49hn is issued Feb 28 04:34:36 crc kubenswrapper[5014]: I0228 04:34:36.715485 5014 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Feb 28 04:34:36 crc kubenswrapper[5014]: I0228 04:34:36.911302 5014 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 28 04:34:37 crc kubenswrapper[5014]: I0228 04:34:37.684646 5014 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-12 18:52:35.981123798 +0000 UTC Feb 28 04:34:37 crc kubenswrapper[5014]: I0228 04:34:37.684693 5014 
certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6182h17m58.296433846s for next certificate rotation Feb 28 04:34:41 crc kubenswrapper[5014]: I0228 04:34:41.478530 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:34:41 crc kubenswrapper[5014]: I0228 04:34:41.479907 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:34:41 crc kubenswrapper[5014]: I0228 04:34:41.479949 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:34:41 crc kubenswrapper[5014]: I0228 04:34:41.479964 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:34:41 crc kubenswrapper[5014]: I0228 04:34:41.480079 5014 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 28 04:34:41 crc kubenswrapper[5014]: I0228 04:34:41.490599 5014 kubelet_node_status.go:115] "Node was previously registered" node="crc" Feb 28 04:34:41 crc kubenswrapper[5014]: I0228 04:34:41.491079 5014 kubelet_node_status.go:79] "Successfully registered node" node="crc" Feb 28 04:34:41 crc kubenswrapper[5014]: E0228 04:34:41.491101 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 28 04:34:41 crc kubenswrapper[5014]: I0228 04:34:41.494692 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:34:41 crc kubenswrapper[5014]: I0228 04:34:41.494755 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:34:41 crc kubenswrapper[5014]: I0228 04:34:41.494767 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:34:41 crc kubenswrapper[5014]: I0228 
04:34:41.494785 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:34:41 crc kubenswrapper[5014]: I0228 04:34:41.494816 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:34:41Z","lastTransitionTime":"2026-02-28T04:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:34:41 crc kubenswrapper[5014]: E0228 04:34:41.508684 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:34:41 crc kubenswrapper[5014]: I0228 04:34:41.516883 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:34:41 crc kubenswrapper[5014]: I0228 04:34:41.516937 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:34:41 crc kubenswrapper[5014]: I0228 04:34:41.516949 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:34:41 crc kubenswrapper[5014]: I0228 04:34:41.516967 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:34:41 crc kubenswrapper[5014]: I0228 04:34:41.516980 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:34:41Z","lastTransitionTime":"2026-02-28T04:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:34:41 crc kubenswrapper[5014]: E0228 04:34:41.526558 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:34:41 crc kubenswrapper[5014]: I0228 04:34:41.534210 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:34:41 crc kubenswrapper[5014]: I0228 04:34:41.534328 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:34:41 crc kubenswrapper[5014]: I0228 04:34:41.534416 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:34:41 crc kubenswrapper[5014]: I0228 04:34:41.534511 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:34:41 crc kubenswrapper[5014]: I0228 04:34:41.534602 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:34:41Z","lastTransitionTime":"2026-02-28T04:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:34:41 crc kubenswrapper[5014]: E0228 04:34:41.545437 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:34:41 crc kubenswrapper[5014]: I0228 04:34:41.553598 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:34:41 crc kubenswrapper[5014]: I0228 04:34:41.553627 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:34:41 crc kubenswrapper[5014]: I0228 04:34:41.553638 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:34:41 crc kubenswrapper[5014]: I0228 04:34:41.553652 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:34:41 crc kubenswrapper[5014]: I0228 04:34:41.553665 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:34:41Z","lastTransitionTime":"2026-02-28T04:34:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:34:41 crc kubenswrapper[5014]: E0228 04:34:41.564886 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:34:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:34:41 crc kubenswrapper[5014]: E0228 04:34:41.565032 5014 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 28 04:34:41 crc kubenswrapper[5014]: E0228 04:34:41.565063 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:41 crc kubenswrapper[5014]: E0228 04:34:41.665967 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:41 crc kubenswrapper[5014]: E0228 04:34:41.766639 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:41 crc kubenswrapper[5014]: E0228 04:34:41.866951 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:41 crc kubenswrapper[5014]: E0228 04:34:41.967437 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:42 crc kubenswrapper[5014]: E0228 04:34:42.068404 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:42 crc kubenswrapper[5014]: E0228 04:34:42.168728 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:42 crc kubenswrapper[5014]: E0228 04:34:42.261364 5014 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 28 04:34:42 crc kubenswrapper[5014]: E0228 04:34:42.269746 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:42 crc kubenswrapper[5014]: 
E0228 04:34:42.370173 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:42 crc kubenswrapper[5014]: E0228 04:34:42.470694 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:42 crc kubenswrapper[5014]: E0228 04:34:42.571187 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:42 crc kubenswrapper[5014]: E0228 04:34:42.672861 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:42 crc kubenswrapper[5014]: E0228 04:34:42.773410 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:42 crc kubenswrapper[5014]: E0228 04:34:42.874285 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:42 crc kubenswrapper[5014]: E0228 04:34:42.974535 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:43 crc kubenswrapper[5014]: E0228 04:34:43.075586 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:43 crc kubenswrapper[5014]: E0228 04:34:43.175695 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:43 crc kubenswrapper[5014]: E0228 04:34:43.276474 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:43 crc kubenswrapper[5014]: E0228 04:34:43.377555 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:43 crc kubenswrapper[5014]: E0228 04:34:43.477916 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" 
Feb 28 04:34:43 crc kubenswrapper[5014]: E0228 04:34:43.578978 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:43 crc kubenswrapper[5014]: E0228 04:34:43.679336 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:43 crc kubenswrapper[5014]: E0228 04:34:43.779912 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:43 crc kubenswrapper[5014]: E0228 04:34:43.880066 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:43 crc kubenswrapper[5014]: E0228 04:34:43.980350 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:44 crc kubenswrapper[5014]: E0228 04:34:44.080907 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:44 crc kubenswrapper[5014]: E0228 04:34:44.181233 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:44 crc kubenswrapper[5014]: E0228 04:34:44.281849 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:44 crc kubenswrapper[5014]: E0228 04:34:44.382919 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:44 crc kubenswrapper[5014]: E0228 04:34:44.483481 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:44 crc kubenswrapper[5014]: E0228 04:34:44.584053 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:44 crc kubenswrapper[5014]: E0228 04:34:44.685069 5014 kubelet_node_status.go:503] "Error getting the current node 
from lister" err="node \"crc\" not found" Feb 28 04:34:44 crc kubenswrapper[5014]: E0228 04:34:44.785996 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:44 crc kubenswrapper[5014]: E0228 04:34:44.886982 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:44 crc kubenswrapper[5014]: E0228 04:34:44.987659 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:45 crc kubenswrapper[5014]: E0228 04:34:45.088167 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:45 crc kubenswrapper[5014]: E0228 04:34:45.188521 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:45 crc kubenswrapper[5014]: E0228 04:34:45.288636 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:45 crc kubenswrapper[5014]: E0228 04:34:45.389516 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:45 crc kubenswrapper[5014]: E0228 04:34:45.490203 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:45 crc kubenswrapper[5014]: E0228 04:34:45.591009 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:45 crc kubenswrapper[5014]: E0228 04:34:45.691927 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:45 crc kubenswrapper[5014]: E0228 04:34:45.792658 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:45 crc kubenswrapper[5014]: E0228 04:34:45.893167 5014 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:45 crc kubenswrapper[5014]: E0228 04:34:45.994304 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:46 crc kubenswrapper[5014]: E0228 04:34:46.095426 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:46 crc kubenswrapper[5014]: E0228 04:34:46.196377 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:46 crc kubenswrapper[5014]: E0228 04:34:46.297293 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:46 crc kubenswrapper[5014]: E0228 04:34:46.398464 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:46 crc kubenswrapper[5014]: E0228 04:34:46.498939 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:46 crc kubenswrapper[5014]: E0228 04:34:46.599437 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:46 crc kubenswrapper[5014]: E0228 04:34:46.699957 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:46 crc kubenswrapper[5014]: E0228 04:34:46.800082 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:46 crc kubenswrapper[5014]: E0228 04:34:46.900408 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:47 crc kubenswrapper[5014]: E0228 04:34:47.001071 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:47 crc 
kubenswrapper[5014]: E0228 04:34:47.102255 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:47 crc kubenswrapper[5014]: I0228 04:34:47.170793 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:34:47 crc kubenswrapper[5014]: I0228 04:34:47.173030 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:34:47 crc kubenswrapper[5014]: I0228 04:34:47.173072 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:34:47 crc kubenswrapper[5014]: I0228 04:34:47.173082 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:34:47 crc kubenswrapper[5014]: I0228 04:34:47.173795 5014 scope.go:117] "RemoveContainer" containerID="b7c458a4854c14b9241df3405d8ff37c267bad8d68195a53353501612ecc46f0" Feb 28 04:34:47 crc kubenswrapper[5014]: E0228 04:34:47.202927 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:47 crc kubenswrapper[5014]: E0228 04:34:47.303857 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:47 crc kubenswrapper[5014]: E0228 04:34:47.404177 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:47 crc kubenswrapper[5014]: E0228 04:34:47.505135 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:47 crc kubenswrapper[5014]: I0228 04:34:47.509880 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 28 04:34:47 crc kubenswrapper[5014]: I0228 04:34:47.513714 5014 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24"} Feb 28 04:34:47 crc kubenswrapper[5014]: I0228 04:34:47.513940 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:34:47 crc kubenswrapper[5014]: I0228 04:34:47.515035 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:34:47 crc kubenswrapper[5014]: I0228 04:34:47.515074 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:34:47 crc kubenswrapper[5014]: I0228 04:34:47.515113 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:34:47 crc kubenswrapper[5014]: E0228 04:34:47.605380 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:47 crc kubenswrapper[5014]: I0228 04:34:47.636923 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:34:47 crc kubenswrapper[5014]: E0228 04:34:47.705718 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:47 crc kubenswrapper[5014]: E0228 04:34:47.806417 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:47 crc kubenswrapper[5014]: I0228 04:34:47.809871 5014 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 28 04:34:47 crc kubenswrapper[5014]: E0228 04:34:47.907015 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:48 crc 
kubenswrapper[5014]: E0228 04:34:48.008630 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:48 crc kubenswrapper[5014]: E0228 04:34:48.109836 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:48 crc kubenswrapper[5014]: E0228 04:34:48.211019 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:48 crc kubenswrapper[5014]: E0228 04:34:48.311626 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:48 crc kubenswrapper[5014]: E0228 04:34:48.411978 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:48 crc kubenswrapper[5014]: E0228 04:34:48.512297 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:48 crc kubenswrapper[5014]: I0228 04:34:48.521980 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 28 04:34:48 crc kubenswrapper[5014]: I0228 04:34:48.522705 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 28 04:34:48 crc kubenswrapper[5014]: I0228 04:34:48.526103 5014 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24" exitCode=255 Feb 28 04:34:48 crc kubenswrapper[5014]: I0228 04:34:48.526180 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24"} Feb 28 04:34:48 crc kubenswrapper[5014]: I0228 04:34:48.526264 5014 scope.go:117] "RemoveContainer" containerID="b7c458a4854c14b9241df3405d8ff37c267bad8d68195a53353501612ecc46f0" Feb 28 04:34:48 crc kubenswrapper[5014]: I0228 04:34:48.526283 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:34:48 crc kubenswrapper[5014]: I0228 04:34:48.527554 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:34:48 crc kubenswrapper[5014]: I0228 04:34:48.527614 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:34:48 crc kubenswrapper[5014]: I0228 04:34:48.527632 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:34:48 crc kubenswrapper[5014]: I0228 04:34:48.528476 5014 scope.go:117] "RemoveContainer" containerID="acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24" Feb 28 04:34:48 crc kubenswrapper[5014]: E0228 04:34:48.528744 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 04:34:48 crc kubenswrapper[5014]: I0228 04:34:48.544928 5014 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 28 04:34:48 crc kubenswrapper[5014]: E0228 04:34:48.613396 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 
04:34:48 crc kubenswrapper[5014]: E0228 04:34:48.714418 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:48 crc kubenswrapper[5014]: E0228 04:34:48.815186 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:48 crc kubenswrapper[5014]: E0228 04:34:48.916249 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:49 crc kubenswrapper[5014]: E0228 04:34:49.016833 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:49 crc kubenswrapper[5014]: E0228 04:34:49.116955 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:49 crc kubenswrapper[5014]: E0228 04:34:49.217668 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:49 crc kubenswrapper[5014]: E0228 04:34:49.318286 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:49 crc kubenswrapper[5014]: E0228 04:34:49.419230 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:49 crc kubenswrapper[5014]: E0228 04:34:49.520454 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:49 crc kubenswrapper[5014]: I0228 04:34:49.532933 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 28 04:34:49 crc kubenswrapper[5014]: I0228 04:34:49.535975 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:34:49 crc kubenswrapper[5014]: I0228 04:34:49.537668 5014 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:34:49 crc kubenswrapper[5014]: I0228 04:34:49.537719 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:34:49 crc kubenswrapper[5014]: I0228 04:34:49.537733 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:34:49 crc kubenswrapper[5014]: I0228 04:34:49.538401 5014 scope.go:117] "RemoveContainer" containerID="acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24" Feb 28 04:34:49 crc kubenswrapper[5014]: E0228 04:34:49.538578 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 04:34:49 crc kubenswrapper[5014]: E0228 04:34:49.621497 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:49 crc kubenswrapper[5014]: E0228 04:34:49.721895 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:49 crc kubenswrapper[5014]: E0228 04:34:49.822858 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:49 crc kubenswrapper[5014]: E0228 04:34:49.923644 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:50 crc kubenswrapper[5014]: E0228 04:34:50.024115 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:50 crc kubenswrapper[5014]: E0228 
04:34:50.124590 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:50 crc kubenswrapper[5014]: E0228 04:34:50.225536 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:50 crc kubenswrapper[5014]: E0228 04:34:50.325785 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:50 crc kubenswrapper[5014]: I0228 04:34:50.378092 5014 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:34:50 crc kubenswrapper[5014]: E0228 04:34:50.426797 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:50 crc kubenswrapper[5014]: E0228 04:34:50.527537 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:50 crc kubenswrapper[5014]: I0228 04:34:50.539125 5014 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 28 04:34:50 crc kubenswrapper[5014]: I0228 04:34:50.540916 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:34:50 crc kubenswrapper[5014]: I0228 04:34:50.540976 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:34:50 crc kubenswrapper[5014]: I0228 04:34:50.540996 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:34:50 crc kubenswrapper[5014]: I0228 04:34:50.541923 5014 scope.go:117] "RemoveContainer" containerID="acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24" Feb 28 04:34:50 crc kubenswrapper[5014]: E0228 04:34:50.542163 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 04:34:50 crc kubenswrapper[5014]: E0228 04:34:50.629022 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:50 crc kubenswrapper[5014]: E0228 04:34:50.729877 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:50 crc kubenswrapper[5014]: E0228 04:34:50.830125 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:50 crc kubenswrapper[5014]: E0228 04:34:50.931262 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:51 crc kubenswrapper[5014]: E0228 04:34:51.031651 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:51 crc kubenswrapper[5014]: E0228 04:34:51.132870 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:51 crc kubenswrapper[5014]: E0228 04:34:51.233888 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:51 crc kubenswrapper[5014]: E0228 04:34:51.334167 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:51 crc kubenswrapper[5014]: E0228 04:34:51.435346 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:51 crc kubenswrapper[5014]: E0228 04:34:51.535900 5014 kubelet_node_status.go:503] "Error getting the current node from 
lister" err="node \"crc\" not found" Feb 28 04:34:51 crc kubenswrapper[5014]: E0228 04:34:51.637083 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:51 crc kubenswrapper[5014]: E0228 04:34:51.737972 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:51 crc kubenswrapper[5014]: E0228 04:34:51.838983 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:51 crc kubenswrapper[5014]: E0228 04:34:51.939447 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 28 04:34:51 crc kubenswrapper[5014]: E0228 04:34:51.964067 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 28 04:34:51 crc kubenswrapper[5014]: I0228 04:34:51.969064 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:34:51 crc kubenswrapper[5014]: I0228 04:34:51.969106 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:34:51 crc kubenswrapper[5014]: I0228 04:34:51.969118 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:34:51 crc kubenswrapper[5014]: I0228 04:34:51.969134 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:34:51 crc kubenswrapper[5014]: I0228 04:34:51.969146 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:34:51Z","lastTransitionTime":"2026-02-28T04:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:34:51 crc kubenswrapper[5014]: E0228 04:34:51.980826 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:34:51 crc kubenswrapper[5014]: I0228 04:34:51.985379 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:34:51 crc kubenswrapper[5014]: I0228 04:34:51.985434 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:34:51 crc kubenswrapper[5014]: I0228 04:34:51.985449 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:34:51 crc kubenswrapper[5014]: I0228 04:34:51.985464 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:34:51 crc kubenswrapper[5014]: I0228 04:34:51.985879 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:34:51Z","lastTransitionTime":"2026-02-28T04:34:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:34:51 crc kubenswrapper[5014]: E0228 04:34:51.998670 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:34:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:34:52 crc kubenswrapper[5014]: I0228 04:34:52.004663 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:34:52 crc kubenswrapper[5014]: I0228 04:34:52.004700 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:34:52 crc kubenswrapper[5014]: I0228 04:34:52.004709 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:34:52 crc kubenswrapper[5014]: I0228 04:34:52.004753 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:34:52 crc kubenswrapper[5014]: I0228 04:34:52.004768 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:34:52Z","lastTransitionTime":"2026-02-28T04:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:34:52 crc kubenswrapper[5014]: E0228 04:34:52.015111 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 28 04:34:52 crc kubenswrapper[5014]: I0228 04:34:52.022077 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 04:34:52 crc kubenswrapper[5014]: I0228 04:34:52.022123 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 28 04:34:52 crc kubenswrapper[5014]: I0228 04:34:52.022133 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 28 04:34:52 crc kubenswrapper[5014]: I0228 04:34:52.022149 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 28 04:34:52 crc kubenswrapper[5014]: I0228 04:34:52.022161 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:34:52Z","lastTransitionTime":"2026-02-28T04:34:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:34:52 crc kubenswrapper[5014]: E0228 04:34:52.032936 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:52Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:52Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:34:52Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:52Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 28 04:34:52 crc kubenswrapper[5014]: E0228 04:34:52.033177 5014 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 28 04:34:52 crc kubenswrapper[5014]: E0228 04:34:52.040519 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:52 crc kubenswrapper[5014]: E0228 04:34:52.141557 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:52 crc kubenswrapper[5014]: E0228 04:34:52.242458 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:52 crc kubenswrapper[5014]: E0228 04:34:52.261633 5014 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 28 04:34:52 crc kubenswrapper[5014]: E0228 04:34:52.343279 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:52 crc kubenswrapper[5014]: E0228 04:34:52.443694 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:52 crc kubenswrapper[5014]: E0228 04:34:52.544060 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:52 crc kubenswrapper[5014]: E0228 04:34:52.644216 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:52 crc kubenswrapper[5014]: E0228 04:34:52.745269 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:52 crc kubenswrapper[5014]: E0228 04:34:52.846272 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:52 crc kubenswrapper[5014]: E0228 04:34:52.946446 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:53 crc kubenswrapper[5014]: E0228 04:34:53.046644 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:53 crc kubenswrapper[5014]: E0228 04:34:53.147776 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:53 crc kubenswrapper[5014]: E0228 04:34:53.248204 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:53 crc kubenswrapper[5014]: E0228 04:34:53.349185 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:53 crc kubenswrapper[5014]: E0228 04:34:53.450202 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:53 crc kubenswrapper[5014]: E0228 04:34:53.550271 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:53 crc kubenswrapper[5014]: E0228 04:34:53.650763 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:53 crc kubenswrapper[5014]: E0228 04:34:53.751889 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:53 crc kubenswrapper[5014]: E0228 04:34:53.852874 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:53 crc kubenswrapper[5014]: E0228 04:34:53.954073 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:54 crc kubenswrapper[5014]: E0228 04:34:54.054731 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:54 crc kubenswrapper[5014]: E0228 04:34:54.154919 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:54 crc kubenswrapper[5014]: E0228 04:34:54.255080 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:54 crc kubenswrapper[5014]: E0228 04:34:54.355216 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:54 crc kubenswrapper[5014]: E0228 04:34:54.455408 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:54 crc kubenswrapper[5014]: E0228 04:34:54.555887 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:54 crc kubenswrapper[5014]: E0228 04:34:54.656416 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:54 crc kubenswrapper[5014]: E0228 04:34:54.756687 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:54 crc kubenswrapper[5014]: E0228 04:34:54.857878 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:54 crc kubenswrapper[5014]: E0228 04:34:54.958388 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:55 crc kubenswrapper[5014]: E0228 04:34:55.059258 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:55 crc kubenswrapper[5014]: E0228 04:34:55.160191 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:55 crc kubenswrapper[5014]: E0228 04:34:55.261280 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:55 crc kubenswrapper[5014]: E0228 04:34:55.362438 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:55 crc kubenswrapper[5014]: E0228 04:34:55.463478 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:55 crc kubenswrapper[5014]: E0228 04:34:55.564002 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:55 crc kubenswrapper[5014]: E0228 04:34:55.664903 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:55 crc kubenswrapper[5014]: E0228 04:34:55.765315 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:55 crc kubenswrapper[5014]: E0228 04:34:55.866056 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:55 crc kubenswrapper[5014]: E0228 04:34:55.967198 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:56 crc kubenswrapper[5014]: E0228 04:34:56.068141 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:56 crc kubenswrapper[5014]: E0228 04:34:56.168457 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:56 crc kubenswrapper[5014]: E0228 04:34:56.269510 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:56 crc kubenswrapper[5014]: E0228 04:34:56.369923 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:56 crc kubenswrapper[5014]: E0228 04:34:56.470418 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:56 crc kubenswrapper[5014]: E0228 04:34:56.570585 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:56 crc kubenswrapper[5014]: E0228 04:34:56.670734 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:56 crc kubenswrapper[5014]: E0228 04:34:56.771752 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:56 crc kubenswrapper[5014]: E0228 04:34:56.872710 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:56 crc kubenswrapper[5014]: E0228 04:34:56.973205 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:57 crc kubenswrapper[5014]: E0228 04:34:57.073971 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:57 crc kubenswrapper[5014]: E0228 04:34:57.174782 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:57 crc kubenswrapper[5014]: E0228 04:34:57.275740 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:57 crc kubenswrapper[5014]: E0228 04:34:57.376899 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:57 crc kubenswrapper[5014]: E0228 04:34:57.477067 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:57 crc kubenswrapper[5014]: E0228 04:34:57.577691 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:57 crc kubenswrapper[5014]: E0228 04:34:57.678286 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:57 crc kubenswrapper[5014]: E0228 04:34:57.779125 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:57 crc kubenswrapper[5014]: E0228 04:34:57.880124 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:57 crc kubenswrapper[5014]: E0228 04:34:57.980285 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:58 crc kubenswrapper[5014]: E0228 04:34:58.081021 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:58 crc kubenswrapper[5014]: E0228 04:34:58.182005 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:58 crc kubenswrapper[5014]: E0228 04:34:58.282833 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:58 crc kubenswrapper[5014]: E0228 04:34:58.383875 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:58 crc kubenswrapper[5014]: E0228 04:34:58.484796 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:58 crc kubenswrapper[5014]: E0228 04:34:58.585597 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:58 crc kubenswrapper[5014]: E0228 04:34:58.686206 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:58 crc kubenswrapper[5014]: E0228 04:34:58.786423 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:58 crc kubenswrapper[5014]: E0228 04:34:58.886877 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:58 crc kubenswrapper[5014]: E0228 04:34:58.987966 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:59 crc kubenswrapper[5014]: E0228 04:34:59.088768 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:59 crc kubenswrapper[5014]: E0228 04:34:59.189314 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:59 crc kubenswrapper[5014]: E0228 04:34:59.290604 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:59 crc kubenswrapper[5014]: E0228 04:34:59.391112 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:59 crc kubenswrapper[5014]: E0228 04:34:59.492162 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:59 crc kubenswrapper[5014]: E0228 04:34:59.593057 5014 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 28 04:34:59 crc kubenswrapper[5014]: I0228 04:34:59.686926 5014 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 28 04:34:59 crc kubenswrapper[5014]: I0228 04:34:59.695249 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 28 04:34:59 crc kubenswrapper[5014]: I0228 04:34:59.695278 5014 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:34:59 crc kubenswrapper[5014]: I0228 04:34:59.695286 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:34:59 crc kubenswrapper[5014]: I0228 04:34:59.695301 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:34:59 crc kubenswrapper[5014]: I0228 04:34:59.695310 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:34:59Z","lastTransitionTime":"2026-02-28T04:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:34:59 crc kubenswrapper[5014]: I0228 04:34:59.797752 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:34:59 crc kubenswrapper[5014]: I0228 04:34:59.797862 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:34:59 crc kubenswrapper[5014]: I0228 04:34:59.797884 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:34:59 crc kubenswrapper[5014]: I0228 04:34:59.797909 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:34:59 crc kubenswrapper[5014]: I0228 04:34:59.797929 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:34:59Z","lastTransitionTime":"2026-02-28T04:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:34:59 crc kubenswrapper[5014]: I0228 04:34:59.900586 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:34:59 crc kubenswrapper[5014]: I0228 04:34:59.900887 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:34:59 crc kubenswrapper[5014]: I0228 04:34:59.900976 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:34:59 crc kubenswrapper[5014]: I0228 04:34:59.901101 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:34:59 crc kubenswrapper[5014]: I0228 04:34:59.901190 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:34:59Z","lastTransitionTime":"2026-02-28T04:34:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.003620 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.003679 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.003698 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.003723 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.003743 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:00Z","lastTransitionTime":"2026-02-28T04:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.106448 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.106519 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.106532 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.106550 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.106563 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:00Z","lastTransitionTime":"2026-02-28T04:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.138979 5014 apiserver.go:52] "Watching apiserver" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.145574 5014 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.145924 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"] Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.146388 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.146411 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.146487 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.147083 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.147135 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.147627 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.147838 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.148039 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.148729 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.149208 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.149242 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.149438 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.151160 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.151274 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.151365 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.151682 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.151697 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.151791 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.183582 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.197915 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.205349 5014 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.209552 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.209760 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.209867 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.209982 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.210073 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:00Z","lastTransitionTime":"2026-02-28T04:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.211720 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.222842 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.234415 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.249401 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.259655 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.270368 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.273519 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.273713 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.273798 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.273942 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.274011 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.274090 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.274185 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.274261 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.274338 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.274402 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.274476 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.274542 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.274613 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.274721 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.274836 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.274924 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.274992 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.275062 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 28 04:35:00 
crc kubenswrapper[5014]: I0228 04:35:00.275158 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.275242 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.275504 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.275581 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.275650 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.275725 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.275793 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.274109 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.274299 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.274565 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.274775 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.274748 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.274925 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.275064 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.275084 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.275216 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.275474 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.275660 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.275647 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.275853 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276250 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.275885 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276280 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276284 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276318 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276331 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276341 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276336 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276361 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276378 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276393 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276475 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276501 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276515 5014 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276530 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276544 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276559 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276575 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276590 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276605 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276623 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276638 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276653 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276669 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 28 04:35:00 crc 
kubenswrapper[5014]: I0228 04:35:00.276684 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276703 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276717 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276732 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276747 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276763 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276778 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276795 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276835 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276852 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276868 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276885 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276900 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276916 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276931 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276948 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276963 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276980 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.276995 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277011 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277028 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277044 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277060 5014 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277076 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277091 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277107 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277122 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277136 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod 
\"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277151 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277167 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277183 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277198 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277214 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277230 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277222 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277247 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277314 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277352 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277370 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277423 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277439 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277477 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277513 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277527 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277581 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277577 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277614 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277630 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277647 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277663 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277680 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277708 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277724 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277739 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277757 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277774 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Feb 28 04:35:00 crc 
kubenswrapper[5014]: I0228 04:35:00.277789 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277820 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277836 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277850 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277864 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277879 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277894 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277909 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277916 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277927 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.277986 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.278023 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.278054 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.278086 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.278118 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.278149 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.278214 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.278247 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.278278 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.278308 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 
04:35:00.278339 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.278372 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.278402 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.278434 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.278467 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.278504 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: 
\"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.278535 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.278565 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.278596 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.278628 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.278661 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 04:35:00 crc 
kubenswrapper[5014]: I0228 04:35:00.278693 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.278727 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.278760 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.278792 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.278856 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.278889 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.278923 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.278955 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.278987 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.279019 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.279052 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 28 04:35:00 crc 
kubenswrapper[5014]: I0228 04:35:00.279090 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.279124 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.279158 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.279193 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.279228 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.279257 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.279288 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.279328 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.279362 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.279397 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.279429 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.279462 5014 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.279492 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.279522 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.279551 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.279582 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.279613 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.279644 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.279676 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.279712 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.279745 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.279780 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.279840 5014 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.279875 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.279909 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.279943 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.279992 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.280028 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.280062 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.280095 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.280131 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.280160 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.280185 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 
04:35:00.280207 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.280230 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.280264 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.280296 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.280332 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.280363 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: 
\"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.280395 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.280432 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.280470 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.280505 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.280543 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.278086 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.278236 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.283877 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.278361 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.278765 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.279219 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.279608 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.279638 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.279670 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.279677 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.279886 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.280269 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.280301 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.280386 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.280431 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.280567 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:35:00.780540359 +0000 UTC m=+89.450666469 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.280616 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.280630 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.281287 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.281415 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.280881 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.282082 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.282116 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.282271 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.282377 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.282594 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.282705 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.282987 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.282947 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.283005 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.283017 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.283121 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.283602 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.283694 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.283720 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.283794 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.283750 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.283733 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.284094 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.283752 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.284167 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.284725 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.284794 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.285069 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.285390 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.285893 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.286058 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.285968 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.286486 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.286797 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.286979 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.287067 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.287107 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.287135 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.287171 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.287197 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.287366 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.287429 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.287473 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.287508 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.287538 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.287568 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.287597 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.287630 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.287665 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.287698 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.287727 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.287757 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.287789 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.287835 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.287869 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.287876 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.288112 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.288173 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.288216 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.288228 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.288229 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.288261 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.288324 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.288369 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.288403 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.288452 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.288491 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.288538 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.288574 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.288619 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.288751 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.288917 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.289130 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.289217 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.289266 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.288894 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.289555 5014 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.289636 5014 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.289664 5014 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.289685 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.289704 5014 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.289735 5014 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.289753 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.289773 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.289792 5014 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.289832 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.289851 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.289869 5014 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.289886 5014 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.289907 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.289926 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.289944 5014 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.289963 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.289983 5014 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.289999 5014 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290017 5014 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290034 5014 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290052 5014 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290069 5014 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290088 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290104 5014 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290121 5014 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290138 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290157 5014 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290200 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290217 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290233 5014 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290249 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290264 5014 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290280 5014 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290297 5014 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290314 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290331 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290349 5014 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290365 5014 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290381 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290397 5014 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290413 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290429 5014 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290444 5014 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290460 5014 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290476 5014 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290492 5014 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290511 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290529 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290546 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290563 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290580 5014 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290596 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290615 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290632 5014 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290649 5014 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290665 5014 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290682 5014 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290698 5014 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290715 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290733 5014 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290751 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290768 5014 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290786 5014 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290802 5014 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290843 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290859 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290877 5014 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290892 5014 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290910 5014 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290926 5014 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290943 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290960 5014 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290976 5014 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.291033 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: 
\"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.291050 5014 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.291066 5014 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.291086 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.291107 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.291125 5014 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.291143 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.289660 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.289674 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.289841 5014 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.291254 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 04:35:00.79123301 +0000 UTC m=+89.461359090 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290035 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.290117 5014 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.291565 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 04:35:00.791549508 +0000 UTC m=+89.461675618 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290274 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290615 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290646 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290747 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290692 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290762 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.290911 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.291687 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.291763 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.292166 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.292212 5014 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.292328 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.292589 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.292796 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.292956 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.293020 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.293422 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.293433 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.293723 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.293892 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.292058 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.294474 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.294649 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.295230 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.295680 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.296072 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.296223 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.296454 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.296696 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.296737 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.297047 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.298411 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.298874 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.299113 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.299332 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.299508 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.299653 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.300160 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.300456 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.300519 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.300579 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.300964 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod 
"c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.301031 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.301212 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.303313 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.303330 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.303389 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.303666 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.303668 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.303784 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.303938 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.304079 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.304097 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.304143 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.304400 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.304688 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.304783 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.304946 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.305040 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.305236 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.291870 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.305269 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.305507 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.305656 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.305740 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.305759 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.305975 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.306087 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.306318 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.306477 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.306827 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.306845 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.306824 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.291167 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.307117 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.307170 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.307208 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.307249 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.308572 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.308594 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.308607 5014 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.308656 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-28 04:35:00.808639803 +0000 UTC m=+89.478765713 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.308728 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.308766 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.309044 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.309094 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.312641 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.312738 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.312752 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.312760 5014 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.312786 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-28 04:35:00.812778046 +0000 UTC m=+89.482903956 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.313131 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.313154 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.313165 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.313181 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.313194 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:00Z","lastTransitionTime":"2026-02-28T04:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.314911 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.314995 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.315005 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.315206 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.315355 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.315723 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.315882 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.316044 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.317646 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.318234 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.323387 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.323824 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.324735 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.324927 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.325041 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.325254 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.325415 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.325420 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.325528 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.325473 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.325702 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.325993 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.326077 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.326332 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.328532 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.328536 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.333134 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.340513 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.347066 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.350998 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.351575 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.391714 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.391760 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.391857 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.391857 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.391875 5014 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.391887 5014 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" 
DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.391897 5014 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.391908 5014 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.391918 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.391929 5014 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.391939 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.391951 5014 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.391961 5014 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.391965 5014 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.391972 5014 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.391983 5014 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.391994 5014 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392005 5014 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392017 5014 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392028 5014 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: 
I0228 04:35:00.392051 5014 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392063 5014 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392074 5014 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392085 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392097 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392108 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392118 5014 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392130 5014 reconciler_common.go:293] "Volume detached for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392140 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392151 5014 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392162 5014 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392173 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392183 5014 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392194 5014 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392204 5014 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node 
\"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392214 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392225 5014 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392235 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392244 5014 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392253 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392264 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392274 5014 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392285 5014 
reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392296 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392305 5014 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392315 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392327 5014 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392339 5014 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392349 5014 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392360 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392370 5014 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392380 5014 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392394 5014 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392406 5014 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392417 5014 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392429 5014 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392441 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") 
on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392452 5014 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392463 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392475 5014 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392487 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392498 5014 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392508 5014 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392518 5014 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392529 5014 reconciler_common.go:293] "Volume detached for 
volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392541 5014 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392552 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392563 5014 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392575 5014 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392588 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392600 5014 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392611 5014 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392622 5014 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392632 5014 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392643 5014 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392654 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392665 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392677 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392688 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" 
DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392699 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392711 5014 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392723 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392734 5014 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392744 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392756 5014 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392766 5014 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc 
kubenswrapper[5014]: I0228 04:35:00.392775 5014 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392786 5014 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392796 5014 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392824 5014 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392836 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392847 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392858 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392869 5014 reconciler_common.go:293] "Volume detached for volume 
\"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392879 5014 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392888 5014 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392898 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392909 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392919 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392943 5014 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392955 5014 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc 
kubenswrapper[5014]: I0228 04:35:00.392967 5014 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392978 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392989 5014 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.392999 5014 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.393010 5014 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.393022 5014 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.393033 5014 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.393043 5014 
reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.393053 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.393064 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.393074 5014 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.393085 5014 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.393097 5014 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.393107 5014 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.393118 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" 
(UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.393128 5014 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.415365 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.415433 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.415459 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.415479 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.415491 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:00Z","lastTransitionTime":"2026-02-28T04:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.462250 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.468130 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.475860 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.483381 5014 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 04:35:00 crc kubenswrapper[5014]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 28 04:35:00 crc kubenswrapper[5014]: set -o allexport Feb 28 04:35:00 crc kubenswrapper[5014]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 28 04:35:00 crc kubenswrapper[5014]: source /etc/kubernetes/apiserver-url.env Feb 28 04:35:00 crc kubenswrapper[5014]: else Feb 28 04:35:00 crc kubenswrapper[5014]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 28 04:35:00 crc kubenswrapper[5014]: exit 1 Feb 28 04:35:00 crc kubenswrapper[5014]: fi Feb 28 04:35:00 crc kubenswrapper[5014]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 28 04:35:00 crc kubenswrapper[5014]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 28 04:35:00 crc kubenswrapper[5014]: > logger="UnhandledError" Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.484747 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.487204 5014 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 04:35:00 crc kubenswrapper[5014]: container 
&Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 28 04:35:00 crc kubenswrapper[5014]: if [[ -f "/env/_master" ]]; then Feb 28 04:35:00 crc kubenswrapper[5014]: set -o allexport Feb 28 04:35:00 crc kubenswrapper[5014]: source "/env/_master" Feb 28 04:35:00 crc kubenswrapper[5014]: set +o allexport Feb 28 04:35:00 crc kubenswrapper[5014]: fi Feb 28 04:35:00 crc kubenswrapper[5014]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Feb 28 04:35:00 crc kubenswrapper[5014]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 28 04:35:00 crc kubenswrapper[5014]: ho_enable="--enable-hybrid-overlay" Feb 28 04:35:00 crc kubenswrapper[5014]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 28 04:35:00 crc kubenswrapper[5014]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 28 04:35:00 crc kubenswrapper[5014]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 28 04:35:00 crc kubenswrapper[5014]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 28 04:35:00 crc kubenswrapper[5014]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 28 04:35:00 crc kubenswrapper[5014]: --webhook-host=127.0.0.1 \ Feb 28 04:35:00 crc kubenswrapper[5014]: --webhook-port=9743 \ Feb 28 04:35:00 crc kubenswrapper[5014]: ${ho_enable} \ Feb 28 04:35:00 crc kubenswrapper[5014]: --enable-interconnect \ Feb 28 04:35:00 crc kubenswrapper[5014]: --disable-approver \ Feb 28 04:35:00 crc kubenswrapper[5014]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 28 04:35:00 crc kubenswrapper[5014]: --wait-for-kubernetes-api=200s \ Feb 28 04:35:00 crc kubenswrapper[5014]: 
--pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 28 04:35:00 crc kubenswrapper[5014]: --loglevel="${LOGLEVEL}" Feb 28 04:35:00 crc kubenswrapper[5014]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 28 04:35:00 crc kubenswrapper[5014]: > logger="UnhandledError" Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.487704 5014 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.488790 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.488919 5014 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 04:35:00 crc kubenswrapper[5014]: container 
&Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 28 04:35:00 crc kubenswrapper[5014]: if [[ -f "/env/_master" ]]; then Feb 28 04:35:00 crc kubenswrapper[5014]: set -o allexport Feb 28 04:35:00 crc kubenswrapper[5014]: source "/env/_master" Feb 28 04:35:00 crc kubenswrapper[5014]: set +o allexport Feb 28 04:35:00 crc kubenswrapper[5014]: fi Feb 28 04:35:00 crc kubenswrapper[5014]: Feb 28 04:35:00 crc kubenswrapper[5014]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 28 04:35:00 crc kubenswrapper[5014]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 28 04:35:00 crc kubenswrapper[5014]: --disable-webhook \ Feb 28 04:35:00 crc kubenswrapper[5014]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 28 04:35:00 crc kubenswrapper[5014]: --loglevel="${LOGLEVEL}" Feb 28 04:35:00 crc kubenswrapper[5014]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 28 04:35:00 crc kubenswrapper[5014]: > logger="UnhandledError" Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.490665 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.517258 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.517297 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.517306 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.517320 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.517330 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:00Z","lastTransitionTime":"2026-02-28T04:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.563443 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"50ce1787fbb2d2c31cacde6178e15712aee954c49d3ffb610d078b9b25271961"} Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.564578 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"0ca7fcf238262e41af9863ce0195556e709022e36c76ac1f69cefc24b105fceb"} Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.564968 5014 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.565558 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"9d6933fe9e822c47787fc9d100966e4f20cf27d1191161c950a0b058982709b5"} Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.566078 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" 
pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.566156 5014 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 04:35:00 crc kubenswrapper[5014]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 28 04:35:00 crc kubenswrapper[5014]: if [[ -f "/env/_master" ]]; then Feb 28 04:35:00 crc kubenswrapper[5014]: set -o allexport Feb 28 04:35:00 crc kubenswrapper[5014]: source "/env/_master" Feb 28 04:35:00 crc kubenswrapper[5014]: set +o allexport Feb 28 04:35:00 crc kubenswrapper[5014]: fi Feb 28 04:35:00 crc kubenswrapper[5014]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Feb 28 04:35:00 crc kubenswrapper[5014]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Feb 28 04:35:00 crc kubenswrapper[5014]: ho_enable="--enable-hybrid-overlay" Feb 28 04:35:00 crc kubenswrapper[5014]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Feb 28 04:35:00 crc kubenswrapper[5014]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Feb 28 04:35:00 crc kubenswrapper[5014]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Feb 28 04:35:00 crc kubenswrapper[5014]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 28 04:35:00 crc kubenswrapper[5014]: --webhook-cert-dir="/etc/webhook-cert" \ Feb 28 04:35:00 crc kubenswrapper[5014]: --webhook-host=127.0.0.1 \ Feb 28 04:35:00 crc kubenswrapper[5014]: --webhook-port=9743 \ Feb 28 04:35:00 crc kubenswrapper[5014]: ${ho_enable} \ Feb 28 04:35:00 crc kubenswrapper[5014]: --enable-interconnect \ Feb 28 04:35:00 crc kubenswrapper[5014]: 
--disable-approver \ Feb 28 04:35:00 crc kubenswrapper[5014]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Feb 28 04:35:00 crc kubenswrapper[5014]: --wait-for-kubernetes-api=200s \ Feb 28 04:35:00 crc kubenswrapper[5014]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Feb 28 04:35:00 crc kubenswrapper[5014]: --loglevel="${LOGLEVEL}" Feb 28 04:35:00 crc kubenswrapper[5014]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:n
il,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 28 04:35:00 crc kubenswrapper[5014]: > logger="UnhandledError" Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.567014 5014 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 04:35:00 crc kubenswrapper[5014]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 28 04:35:00 crc kubenswrapper[5014]: set -o allexport Feb 28 04:35:00 crc kubenswrapper[5014]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 28 04:35:00 crc kubenswrapper[5014]: source /etc/kubernetes/apiserver-url.env Feb 28 04:35:00 crc kubenswrapper[5014]: else Feb 28 04:35:00 crc kubenswrapper[5014]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 28 04:35:00 crc kubenswrapper[5014]: exit 1 Feb 28 04:35:00 crc kubenswrapper[5014]: fi Feb 28 04:35:00 crc kubenswrapper[5014]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 28 04:35:00 crc kubenswrapper[5014]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 28 04:35:00 crc kubenswrapper[5014]: > logger="UnhandledError" Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.568093 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.568494 5014 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 04:35:00 crc kubenswrapper[5014]: container 
&Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Feb 28 04:35:00 crc kubenswrapper[5014]: if [[ -f "/env/_master" ]]; then Feb 28 04:35:00 crc kubenswrapper[5014]: set -o allexport Feb 28 04:35:00 crc kubenswrapper[5014]: source "/env/_master" Feb 28 04:35:00 crc kubenswrapper[5014]: set +o allexport Feb 28 04:35:00 crc kubenswrapper[5014]: fi Feb 28 04:35:00 crc kubenswrapper[5014]: Feb 28 04:35:00 crc kubenswrapper[5014]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Feb 28 04:35:00 crc kubenswrapper[5014]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Feb 28 04:35:00 crc kubenswrapper[5014]: --disable-webhook \ Feb 28 04:35:00 crc kubenswrapper[5014]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Feb 28 04:35:00 crc kubenswrapper[5014]: --loglevel="${LOGLEVEL}" Feb 28 04:35:00 crc kubenswrapper[5014]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 28 04:35:00 crc kubenswrapper[5014]: > logger="UnhandledError" Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.569640 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.575528 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.584006 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.593923 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.604974 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.614039 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.618868 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.618903 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 
04:35:00.618911 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.618929 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.618938 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:00Z","lastTransitionTime":"2026-02-28T04:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.622859 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.632596 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.641342 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.650135 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.657388 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.665757 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.675931 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.721017 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:00 crc 
kubenswrapper[5014]: I0228 04:35:00.721050 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.721060 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.721075 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.721086 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:00Z","lastTransitionTime":"2026-02-28T04:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.796020 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.796118 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:35:01.796099203 +0000 UTC m=+90.466225123 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.796162 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.796183 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.796276 5014 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.796277 5014 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.796312 5014 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 04:35:01.796305479 +0000 UTC m=+90.466431389 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.796323 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 04:35:01.796318209 +0000 UTC m=+90.466444119 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.823376 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.823416 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.823425 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.823442 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.823454 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:00Z","lastTransitionTime":"2026-02-28T04:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.897193 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.897239 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.897336 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.897350 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.897361 5014 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.897408 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-28 04:35:01.897395476 +0000 UTC m=+90.567521386 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.897414 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.897453 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.897464 5014 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 04:35:00 crc kubenswrapper[5014]: E0228 04:35:00.897524 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-28 04:35:01.897503349 +0000 UTC m=+90.567629259 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.925608 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.925674 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.925688 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.925705 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:00 crc kubenswrapper[5014]: I0228 04:35:00.925720 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:00Z","lastTransitionTime":"2026-02-28T04:35:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.027508 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.027564 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.027579 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.027600 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.027615 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:01Z","lastTransitionTime":"2026-02-28T04:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.129128 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.129167 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.129175 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.129187 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.129196 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:01Z","lastTransitionTime":"2026-02-28T04:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.187758 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.231829 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.231872 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.231886 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.231903 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.231914 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:01Z","lastTransitionTime":"2026-02-28T04:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.334959 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.335002 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.335012 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.335025 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.335035 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:01Z","lastTransitionTime":"2026-02-28T04:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.437752 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.437826 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.437835 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.437853 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.437865 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:01Z","lastTransitionTime":"2026-02-28T04:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.540649 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.540733 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.540752 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.540781 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.540803 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:01Z","lastTransitionTime":"2026-02-28T04:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.643060 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.643098 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.643106 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.643119 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.643128 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:01Z","lastTransitionTime":"2026-02-28T04:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.744783 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.744846 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.744858 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.744876 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.744888 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:01Z","lastTransitionTime":"2026-02-28T04:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.806108 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.806184 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.806206 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:01 crc kubenswrapper[5014]: E0228 04:35:01.806294 5014 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 04:35:01 crc kubenswrapper[5014]: E0228 04:35:01.806334 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 04:35:03.806322584 +0000 UTC m=+92.476448494 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 04:35:01 crc kubenswrapper[5014]: E0228 04:35:01.806374 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:35:03.806344724 +0000 UTC m=+92.476470634 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:35:01 crc kubenswrapper[5014]: E0228 04:35:01.806450 5014 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 04:35:01 crc kubenswrapper[5014]: E0228 04:35:01.806490 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 04:35:03.806483638 +0000 UTC m=+92.476609548 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.847031 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.847075 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.847085 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.847102 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.847113 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:01Z","lastTransitionTime":"2026-02-28T04:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.906694 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.906790 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:35:01 crc kubenswrapper[5014]: E0228 04:35:01.906964 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 04:35:01 crc kubenswrapper[5014]: E0228 04:35:01.907003 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 04:35:01 crc kubenswrapper[5014]: E0228 04:35:01.907021 5014 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 04:35:01 crc kubenswrapper[5014]: E0228 04:35:01.907070 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 
04:35:01 crc kubenswrapper[5014]: E0228 04:35:01.907097 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-28 04:35:03.907076082 +0000 UTC m=+92.577202173 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 04:35:01 crc kubenswrapper[5014]: E0228 04:35:01.907108 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 04:35:01 crc kubenswrapper[5014]: E0228 04:35:01.907135 5014 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 04:35:01 crc kubenswrapper[5014]: E0228 04:35:01.907221 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-28 04:35:03.907192186 +0000 UTC m=+92.577318136 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.949993 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.950041 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.950053 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.950071 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:01 crc kubenswrapper[5014]: I0228 04:35:01.950083 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:01Z","lastTransitionTime":"2026-02-28T04:35:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.052417 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.052463 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.052475 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.052493 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.052506 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:02Z","lastTransitionTime":"2026-02-28T04:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.155373 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.155434 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.155449 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.155468 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.155480 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:02Z","lastTransitionTime":"2026-02-28T04:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.170684 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.170739 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.170846 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:02 crc kubenswrapper[5014]: E0228 04:35:02.170939 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:35:02 crc kubenswrapper[5014]: E0228 04:35:02.171123 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:35:02 crc kubenswrapper[5014]: E0228 04:35:02.171282 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.174194 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.174901 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.176051 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.176642 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.177712 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.178212 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.178762 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.178761 5014 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.179883 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.180615 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.181673 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 28 04:35:02 crc 
kubenswrapper[5014]: I0228 04:35:02.182257 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.183380 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.183984 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.184636 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.185768 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.189155 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.189892 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.190353 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 28 04:35:02 crc 
kubenswrapper[5014]: I0228 04:35:02.191024 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.191695 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.192253 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.193955 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.194657 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.195328 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.195746 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.196272 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.197533 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.198899 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.199418 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.200350 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.200944 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.201772 5014 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.201889 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.203443 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.204343 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.204736 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.207349 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.208251 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.209369 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.210211 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.211542 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.212175 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.212996 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.214180 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.215339 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.215937 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.217098 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.217919 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.219241 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.220065 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.221191 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.221773 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.222898 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.224335 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.225077 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.227096 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.247176 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.257416 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.257652 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:02 crc 
kubenswrapper[5014]: I0228 04:35:02.257748 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.257850 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.257948 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:02Z","lastTransitionTime":"2026-02-28T04:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.263648 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.275414 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.283286 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.307275 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:02 crc 
kubenswrapper[5014]: I0228 04:35:02.307459 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.307543 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.307636 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.307703 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:02Z","lastTransitionTime":"2026-02-28T04:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:02 crc kubenswrapper[5014]: E0228 04:35:02.318509 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.322010 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.322043 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.322051 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.322063 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.322073 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:02Z","lastTransitionTime":"2026-02-28T04:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:02 crc kubenswrapper[5014]: E0228 04:35:02.337261 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.341116 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.341220 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.341288 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.341358 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.341420 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:02Z","lastTransitionTime":"2026-02-28T04:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:02 crc kubenswrapper[5014]: E0228 04:35:02.350365 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.353912 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.353961 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.353973 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.353990 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.354001 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:02Z","lastTransitionTime":"2026-02-28T04:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:02 crc kubenswrapper[5014]: E0228 04:35:02.364004 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.367695 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.367756 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.367768 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.367785 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.367846 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:02Z","lastTransitionTime":"2026-02-28T04:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:02 crc kubenswrapper[5014]: E0228 04:35:02.380176 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:02 crc kubenswrapper[5014]: E0228 04:35:02.380440 5014 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.382252 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.382310 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.382324 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.382341 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.382353 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:02Z","lastTransitionTime":"2026-02-28T04:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.484996 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.485048 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.485059 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.485075 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.485089 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:02Z","lastTransitionTime":"2026-02-28T04:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.586916 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.586955 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.586966 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.586979 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.586990 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:02Z","lastTransitionTime":"2026-02-28T04:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.689857 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.689902 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.689917 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.689935 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.689946 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:02Z","lastTransitionTime":"2026-02-28T04:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.793163 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.793245 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.793270 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.793299 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.793321 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:02Z","lastTransitionTime":"2026-02-28T04:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.896647 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.896771 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.896789 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.896851 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:02 crc kubenswrapper[5014]: I0228 04:35:02.896871 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:02Z","lastTransitionTime":"2026-02-28T04:35:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:02.999970 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.000031 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.000045 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.000063 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.000079 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:03Z","lastTransitionTime":"2026-02-28T04:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.103067 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.103146 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.103164 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.103192 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.103210 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:03Z","lastTransitionTime":"2026-02-28T04:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.191286 5014 scope.go:117] "RemoveContainer" containerID="acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24" Feb 28 04:35:03 crc kubenswrapper[5014]: E0228 04:35:03.192036 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.191492 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.206235 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.206294 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.206307 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.206329 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.206346 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:03Z","lastTransitionTime":"2026-02-28T04:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.309650 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.309699 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.309709 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.309727 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.309748 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:03Z","lastTransitionTime":"2026-02-28T04:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.412220 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.412256 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.412265 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.412280 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.412290 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:03Z","lastTransitionTime":"2026-02-28T04:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.515694 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.515747 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.515757 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.515790 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.515825 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:03Z","lastTransitionTime":"2026-02-28T04:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.572610 5014 scope.go:117] "RemoveContainer" containerID="acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24" Feb 28 04:35:03 crc kubenswrapper[5014]: E0228 04:35:03.572955 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.618994 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.619291 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.619401 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.619499 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.619605 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:03Z","lastTransitionTime":"2026-02-28T04:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.722208 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.722622 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.722703 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.722776 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.722865 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:03Z","lastTransitionTime":"2026-02-28T04:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.826951 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.827005 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.827017 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.827037 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.827052 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:03Z","lastTransitionTime":"2026-02-28T04:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.844480 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.844575 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.844612 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:03 crc kubenswrapper[5014]: E0228 04:35:03.844740 5014 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 04:35:03 crc kubenswrapper[5014]: E0228 04:35:03.844798 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 04:35:07.844781923 +0000 UTC m=+96.514907833 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 04:35:03 crc kubenswrapper[5014]: E0228 04:35:03.845128 5014 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 04:35:03 crc kubenswrapper[5014]: E0228 04:35:03.845389 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 04:35:07.84535432 +0000 UTC m=+96.515480270 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 04:35:03 crc kubenswrapper[5014]: E0228 04:35:03.846038 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:35:07.846020928 +0000 UTC m=+96.516147078 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.929429 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.929472 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.929486 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.929503 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.929516 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:03Z","lastTransitionTime":"2026-02-28T04:35:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.945123 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:35:03 crc kubenswrapper[5014]: I0228 04:35:03.945204 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:35:03 crc kubenswrapper[5014]: E0228 04:35:03.945424 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 04:35:03 crc kubenswrapper[5014]: E0228 04:35:03.945479 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 04:35:03 crc kubenswrapper[5014]: E0228 04:35:03.945506 5014 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 04:35:03 crc kubenswrapper[5014]: E0228 04:35:03.945610 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-28 04:35:07.945577133 +0000 UTC m=+96.615703083 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 04:35:03 crc kubenswrapper[5014]: E0228 04:35:03.946064 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 04:35:03 crc kubenswrapper[5014]: E0228 04:35:03.946290 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 04:35:03 crc kubenswrapper[5014]: E0228 04:35:03.946430 5014 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 04:35:03 crc kubenswrapper[5014]: E0228 04:35:03.946613 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-28 04:35:07.946587871 +0000 UTC m=+96.616713881 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.031907 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.031936 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.031944 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.031958 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.031982 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:04Z","lastTransitionTime":"2026-02-28T04:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.134912 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.134964 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.134975 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.134992 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.135005 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:04Z","lastTransitionTime":"2026-02-28T04:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.170663 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.170752 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:04 crc kubenswrapper[5014]: E0228 04:35:04.170819 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.170885 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:35:04 crc kubenswrapper[5014]: E0228 04:35:04.171081 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:35:04 crc kubenswrapper[5014]: E0228 04:35:04.171247 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.237503 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.237619 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.237644 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.237669 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.237686 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:04Z","lastTransitionTime":"2026-02-28T04:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.340245 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.340284 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.340297 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.340313 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.340324 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:04Z","lastTransitionTime":"2026-02-28T04:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.442605 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.442642 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.442654 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.442671 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.442684 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:04Z","lastTransitionTime":"2026-02-28T04:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.545781 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.546037 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.546123 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.546301 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.546387 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:04Z","lastTransitionTime":"2026-02-28T04:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.649541 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.649585 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.649594 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.649616 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.649634 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:04Z","lastTransitionTime":"2026-02-28T04:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.753172 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.753222 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.753236 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.753253 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.753270 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:04Z","lastTransitionTime":"2026-02-28T04:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.855703 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.855770 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.855783 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.855801 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.855834 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:04Z","lastTransitionTime":"2026-02-28T04:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.958096 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.958144 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.958155 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.958168 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:04 crc kubenswrapper[5014]: I0228 04:35:04.958179 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:04Z","lastTransitionTime":"2026-02-28T04:35:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.061369 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.061417 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.061429 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.061448 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.061461 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:05Z","lastTransitionTime":"2026-02-28T04:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.164848 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.164909 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.164921 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.164940 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.164953 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:05Z","lastTransitionTime":"2026-02-28T04:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.267849 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.267915 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.267927 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.267942 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.267971 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:05Z","lastTransitionTime":"2026-02-28T04:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.370636 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.370688 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.370698 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.370716 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.370727 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:05Z","lastTransitionTime":"2026-02-28T04:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.472798 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.472860 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.472873 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.472889 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.472901 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:05Z","lastTransitionTime":"2026-02-28T04:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.575356 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.575423 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.575436 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.575474 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.575488 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:05Z","lastTransitionTime":"2026-02-28T04:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.678249 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.678290 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.678323 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.678342 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.678354 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:05Z","lastTransitionTime":"2026-02-28T04:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.781417 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.781468 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.781484 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.781509 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.781525 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:05Z","lastTransitionTime":"2026-02-28T04:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.884096 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.884149 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.884161 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.884179 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.884193 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:05Z","lastTransitionTime":"2026-02-28T04:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.986659 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.986693 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.986702 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.986717 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:05 crc kubenswrapper[5014]: I0228 04:35:05.986728 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:05Z","lastTransitionTime":"2026-02-28T04:35:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.089380 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.089433 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.089452 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.089476 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.089494 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:06Z","lastTransitionTime":"2026-02-28T04:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.171550 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.171633 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.171633 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:06 crc kubenswrapper[5014]: E0228 04:35:06.171771 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:35:06 crc kubenswrapper[5014]: E0228 04:35:06.171849 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:35:06 crc kubenswrapper[5014]: E0228 04:35:06.171913 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.191909 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.191966 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.191995 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.192008 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.192018 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:06Z","lastTransitionTime":"2026-02-28T04:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.295070 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.295145 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.295157 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.295174 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.295186 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:06Z","lastTransitionTime":"2026-02-28T04:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.397656 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.397705 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.397738 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.397757 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.397769 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:06Z","lastTransitionTime":"2026-02-28T04:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.500923 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.501007 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.501027 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.501056 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.501078 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:06Z","lastTransitionTime":"2026-02-28T04:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.604177 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.604249 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.604274 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.604311 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.604333 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:06Z","lastTransitionTime":"2026-02-28T04:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.706819 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.706877 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.706887 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.706905 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.706931 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:06Z","lastTransitionTime":"2026-02-28T04:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.809702 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.809781 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.809838 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.809878 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.809906 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:06Z","lastTransitionTime":"2026-02-28T04:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.913348 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.913404 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.913415 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.913439 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:06 crc kubenswrapper[5014]: I0228 04:35:06.913452 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:06Z","lastTransitionTime":"2026-02-28T04:35:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.016508 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.016641 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.016669 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.016704 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.016729 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:07Z","lastTransitionTime":"2026-02-28T04:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.120203 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.120257 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.120274 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.120300 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.120318 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:07Z","lastTransitionTime":"2026-02-28T04:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.223527 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.223595 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.223610 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.223632 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.223653 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:07Z","lastTransitionTime":"2026-02-28T04:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.326562 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.326609 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.326621 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.326637 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.326647 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:07Z","lastTransitionTime":"2026-02-28T04:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.429076 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.429123 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.429133 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.429152 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.429166 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:07Z","lastTransitionTime":"2026-02-28T04:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.532194 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.532265 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.532282 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.532310 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.532328 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:07Z","lastTransitionTime":"2026-02-28T04:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.635950 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.636017 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.636041 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.636068 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.636089 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:07Z","lastTransitionTime":"2026-02-28T04:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.737934 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.738007 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.738020 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.738035 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.738047 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:07Z","lastTransitionTime":"2026-02-28T04:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.840915 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.840964 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.840974 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.840989 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.841000 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:07Z","lastTransitionTime":"2026-02-28T04:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.878974 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.879133 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:07 crc kubenswrapper[5014]: E0228 04:35:07.879192 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:35:15.879159198 +0000 UTC m=+104.549285108 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.879271 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:07 crc kubenswrapper[5014]: E0228 04:35:07.879279 5014 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 04:35:07 crc kubenswrapper[5014]: E0228 04:35:07.879374 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 04:35:15.879346203 +0000 UTC m=+104.549472143 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 04:35:07 crc kubenswrapper[5014]: E0228 04:35:07.879491 5014 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 04:35:07 crc kubenswrapper[5014]: E0228 04:35:07.879598 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 04:35:15.879577389 +0000 UTC m=+104.549703299 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.943401 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.943450 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.943458 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.943474 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 
28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.943483 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:07Z","lastTransitionTime":"2026-02-28T04:35:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.980758 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:35:07 crc kubenswrapper[5014]: I0228 04:35:07.980859 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:35:07 crc kubenswrapper[5014]: E0228 04:35:07.981015 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 04:35:07 crc kubenswrapper[5014]: E0228 04:35:07.981037 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 04:35:07 crc kubenswrapper[5014]: E0228 04:35:07.981053 5014 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 04:35:07 crc kubenswrapper[5014]: E0228 04:35:07.981122 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-28 04:35:15.981104369 +0000 UTC m=+104.651230279 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 04:35:07 crc kubenswrapper[5014]: E0228 04:35:07.981115 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 04:35:07 crc kubenswrapper[5014]: E0228 04:35:07.981185 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 04:35:07 crc kubenswrapper[5014]: E0228 04:35:07.981204 5014 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 04:35:07 crc kubenswrapper[5014]: E0228 04:35:07.981281 5014 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-28 04:35:15.981252913 +0000 UTC m=+104.651378983 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.046057 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.046125 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.046137 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.046160 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.046180 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:08Z","lastTransitionTime":"2026-02-28T04:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.149379 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.149438 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.149470 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.149498 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.149513 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:08Z","lastTransitionTime":"2026-02-28T04:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.170965 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.170973 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.171188 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:35:08 crc kubenswrapper[5014]: E0228 04:35:08.171260 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:35:08 crc kubenswrapper[5014]: E0228 04:35:08.171446 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:35:08 crc kubenswrapper[5014]: E0228 04:35:08.171603 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.191964 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.252647 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.252691 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.252707 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.252731 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.252750 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:08Z","lastTransitionTime":"2026-02-28T04:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.355205 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.355279 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.355297 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.355321 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.355340 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:08Z","lastTransitionTime":"2026-02-28T04:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.457746 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.457820 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.457833 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.457849 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.457860 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:08Z","lastTransitionTime":"2026-02-28T04:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.560355 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.560392 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.560403 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.560419 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.560431 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:08Z","lastTransitionTime":"2026-02-28T04:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.663211 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.663250 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.663263 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.663285 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.663306 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:08Z","lastTransitionTime":"2026-02-28T04:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.765847 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.765906 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.765924 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.765945 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.765956 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:08Z","lastTransitionTime":"2026-02-28T04:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.868790 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.868870 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.868884 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.868903 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.868917 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:08Z","lastTransitionTime":"2026-02-28T04:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.972151 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.972213 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.972232 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.972262 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:08 crc kubenswrapper[5014]: I0228 04:35:08.972286 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:08Z","lastTransitionTime":"2026-02-28T04:35:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.075467 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.075526 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.075542 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.075562 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.075581 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:09Z","lastTransitionTime":"2026-02-28T04:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.179110 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.179152 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.179166 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.179186 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.179202 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:09Z","lastTransitionTime":"2026-02-28T04:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.282512 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.282626 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.282646 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.282677 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.282708 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:09Z","lastTransitionTime":"2026-02-28T04:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.385327 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.385364 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.385374 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.385389 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.385413 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:09Z","lastTransitionTime":"2026-02-28T04:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.487425 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.487474 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.487485 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.487497 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.487508 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:09Z","lastTransitionTime":"2026-02-28T04:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.589484 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.589528 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.589537 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.589554 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.589563 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:09Z","lastTransitionTime":"2026-02-28T04:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.691624 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.691681 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.691689 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.691703 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.691712 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:09Z","lastTransitionTime":"2026-02-28T04:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.793831 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.793875 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.793883 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.793896 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.793904 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:09Z","lastTransitionTime":"2026-02-28T04:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.896005 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.896083 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.896095 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.896112 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.896126 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:09Z","lastTransitionTime":"2026-02-28T04:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.998834 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.998877 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.998886 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.998903 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:09 crc kubenswrapper[5014]: I0228 04:35:09.998913 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:09Z","lastTransitionTime":"2026-02-28T04:35:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.101319 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.101355 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.101367 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.101393 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.101407 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:10Z","lastTransitionTime":"2026-02-28T04:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.170939 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.170976 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.170937 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:35:10 crc kubenswrapper[5014]: E0228 04:35:10.171086 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:35:10 crc kubenswrapper[5014]: E0228 04:35:10.171159 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:35:10 crc kubenswrapper[5014]: E0228 04:35:10.171245 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.203407 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.203448 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.203458 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.203473 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.203484 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:10Z","lastTransitionTime":"2026-02-28T04:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.305395 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.305442 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.305452 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.305468 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.305478 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:10Z","lastTransitionTime":"2026-02-28T04:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.408338 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.408421 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.408435 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.408465 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.408477 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:10Z","lastTransitionTime":"2026-02-28T04:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.510460 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.510504 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.510514 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.510538 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.510548 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:10Z","lastTransitionTime":"2026-02-28T04:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.613428 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.613498 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.613515 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.613541 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.613559 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:10Z","lastTransitionTime":"2026-02-28T04:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.715229 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.715267 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.715279 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.715296 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.715309 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:10Z","lastTransitionTime":"2026-02-28T04:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.817686 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.817725 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.817735 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.817750 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.817760 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:10Z","lastTransitionTime":"2026-02-28T04:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.919974 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.920019 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.920029 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.920044 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:10 crc kubenswrapper[5014]: I0228 04:35:10.920054 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:10Z","lastTransitionTime":"2026-02-28T04:35:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.023401 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.023440 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.023450 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.023464 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.023474 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:11Z","lastTransitionTime":"2026-02-28T04:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.126767 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.126838 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.126848 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.126862 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.126872 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:11Z","lastTransitionTime":"2026-02-28T04:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.229435 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.229514 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.229532 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.229561 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.229579 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:11Z","lastTransitionTime":"2026-02-28T04:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.332301 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.332373 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.332385 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.332409 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.332423 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:11Z","lastTransitionTime":"2026-02-28T04:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.435760 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.435893 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.435920 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.435955 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.435980 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:11Z","lastTransitionTime":"2026-02-28T04:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.538692 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.538748 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.538763 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.538782 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.538795 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:11Z","lastTransitionTime":"2026-02-28T04:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.643497 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.643565 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.643584 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.643648 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.643668 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:11Z","lastTransitionTime":"2026-02-28T04:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.745594 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.746029 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.746121 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.746243 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.746320 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:11Z","lastTransitionTime":"2026-02-28T04:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.849826 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.850104 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.850167 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.850235 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.850305 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:11Z","lastTransitionTime":"2026-02-28T04:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.952986 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.953033 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.953044 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.953060 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:11 crc kubenswrapper[5014]: I0228 04:35:11.953072 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:11Z","lastTransitionTime":"2026-02-28T04:35:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.055949 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.056477 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.056581 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.056651 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.056740 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:12Z","lastTransitionTime":"2026-02-28T04:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.159578 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.159629 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.159638 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.159661 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.159673 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:12Z","lastTransitionTime":"2026-02-28T04:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.170942 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:35:12 crc kubenswrapper[5014]: E0228 04:35:12.171252 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.171291 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.171316 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:12 crc kubenswrapper[5014]: E0228 04:35:12.171693 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:35:12 crc kubenswrapper[5014]: E0228 04:35:12.171827 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:35:12 crc kubenswrapper[5014]: E0228 04:35:12.174304 5014 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 04:35:12 crc kubenswrapper[5014]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Feb 28 04:35:12 crc kubenswrapper[5014]: set -o allexport Feb 28 04:35:12 crc kubenswrapper[5014]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Feb 28 04:35:12 crc kubenswrapper[5014]: source /etc/kubernetes/apiserver-url.env Feb 28 04:35:12 crc kubenswrapper[5014]: else Feb 28 04:35:12 crc kubenswrapper[5014]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Feb 28 04:35:12 crc kubenswrapper[5014]: exit 1 Feb 28 04:35:12 crc kubenswrapper[5014]: fi Feb 28 04:35:12 crc kubenswrapper[5014]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Feb 28 04:35:12 crc kubenswrapper[5014]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Feb 28 04:35:12 crc kubenswrapper[5014]: > logger="UnhandledError" Feb 28 04:35:12 crc kubenswrapper[5014]: E0228 04:35:12.175415 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.191136 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.203239 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.213222 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.222046 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.229776 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.246090 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.257699 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.261699 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.261724 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.261733 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.261762 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.261780 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:12Z","lastTransitionTime":"2026-02-28T04:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.267027 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.277254 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.364218 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.364259 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.364269 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.364283 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.364294 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:12Z","lastTransitionTime":"2026-02-28T04:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.467186 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.467254 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.467328 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.467360 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.467394 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:12Z","lastTransitionTime":"2026-02-28T04:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.570251 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.570305 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.570319 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.570342 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.570355 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:12Z","lastTransitionTime":"2026-02-28T04:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.637050 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.637103 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.637116 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.637137 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.637149 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:12Z","lastTransitionTime":"2026-02-28T04:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:12 crc kubenswrapper[5014]: E0228 04:35:12.648713 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.653616 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.653660 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.653674 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.653695 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.653717 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:12Z","lastTransitionTime":"2026-02-28T04:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:12 crc kubenswrapper[5014]: E0228 04:35:12.665067 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.670245 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.670290 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.670303 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.670326 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.670341 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:12Z","lastTransitionTime":"2026-02-28T04:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:12 crc kubenswrapper[5014]: E0228 04:35:12.680837 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.686063 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.686121 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.686137 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.686159 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.686176 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:12Z","lastTransitionTime":"2026-02-28T04:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:12 crc kubenswrapper[5014]: E0228 04:35:12.699015 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.703253 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.703300 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.703331 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.703349 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.703361 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:12Z","lastTransitionTime":"2026-02-28T04:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:12 crc kubenswrapper[5014]: E0228 04:35:12.714702 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:12 crc kubenswrapper[5014]: E0228 04:35:12.714955 5014 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.717060 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.717107 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.717119 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.717134 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.717145 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:12Z","lastTransitionTime":"2026-02-28T04:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.819476 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.819507 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.819515 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.819528 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.819537 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:12Z","lastTransitionTime":"2026-02-28T04:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.922483 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.922556 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.922570 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.922596 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:12 crc kubenswrapper[5014]: I0228 04:35:12.922611 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:12Z","lastTransitionTime":"2026-02-28T04:35:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.024912 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.024964 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.024976 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.024993 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.025010 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:13Z","lastTransitionTime":"2026-02-28T04:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.128362 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.128412 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.128423 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.128438 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.128446 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:13Z","lastTransitionTime":"2026-02-28T04:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.230454 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.230483 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.230492 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.230505 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.230516 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:13Z","lastTransitionTime":"2026-02-28T04:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.333384 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.333475 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.333525 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.333549 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.333567 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:13Z","lastTransitionTime":"2026-02-28T04:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.436091 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.436120 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.436128 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.436161 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.436173 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:13Z","lastTransitionTime":"2026-02-28T04:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.539958 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.540080 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.540098 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.540123 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.540144 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:13Z","lastTransitionTime":"2026-02-28T04:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.644145 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.644219 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.644238 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.644279 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.644298 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:13Z","lastTransitionTime":"2026-02-28T04:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.746999 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.747035 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.747049 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.747064 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.747076 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:13Z","lastTransitionTime":"2026-02-28T04:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.850617 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.850676 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.850691 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.850714 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.850730 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:13Z","lastTransitionTime":"2026-02-28T04:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.954083 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.954142 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.954156 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.954184 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:13 crc kubenswrapper[5014]: I0228 04:35:13.954199 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:13Z","lastTransitionTime":"2026-02-28T04:35:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.057567 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.057611 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.057623 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.057643 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.057656 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:14Z","lastTransitionTime":"2026-02-28T04:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.160314 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.160391 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.160413 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.160440 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.160461 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:14Z","lastTransitionTime":"2026-02-28T04:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.171700 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.171739 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:14 crc kubenswrapper[5014]: E0228 04:35:14.171929 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.172071 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:35:14 crc kubenswrapper[5014]: E0228 04:35:14.173669 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:35:14 crc kubenswrapper[5014]: E0228 04:35:14.173964 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.262765 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.262818 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.262826 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.262840 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.262850 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:14Z","lastTransitionTime":"2026-02-28T04:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.365681 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.365728 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.365739 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.365756 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.365768 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:14Z","lastTransitionTime":"2026-02-28T04:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.468658 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.468717 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.468733 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.468751 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.468763 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:14Z","lastTransitionTime":"2026-02-28T04:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.572581 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.572636 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.572647 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.572666 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.572679 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:14Z","lastTransitionTime":"2026-02-28T04:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.676122 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.676186 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.676204 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.676230 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.676247 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:14Z","lastTransitionTime":"2026-02-28T04:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.682764 5014 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.780626 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.780693 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.780707 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.780727 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.780739 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:14Z","lastTransitionTime":"2026-02-28T04:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.885950 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.886013 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.886025 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.886048 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.886062 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:14Z","lastTransitionTime":"2026-02-28T04:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.988840 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.988879 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.988889 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.988906 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:14 crc kubenswrapper[5014]: I0228 04:35:14.988917 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:14Z","lastTransitionTime":"2026-02-28T04:35:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.025316 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-mpjds"] Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.025631 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-mpjds" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.027702 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.027951 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.028086 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.046067 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.054655 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.061089 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.068564 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.078008 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.087001 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.094096 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.094152 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.094165 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.094183 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.094195 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:15Z","lastTransitionTime":"2026-02-28T04:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.098299 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\
\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.111892 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.127724 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.140667 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.147650 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9ee39668-c5e4-4da8-807d-a63d9591161c-hosts-file\") pod \"node-resolver-mpjds\" (UID: \"9ee39668-c5e4-4da8-807d-a63d9591161c\") " pod="openshift-dns/node-resolver-mpjds" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.147698 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b79t\" (UniqueName: \"kubernetes.io/projected/9ee39668-c5e4-4da8-807d-a63d9591161c-kube-api-access-4b79t\") pod \"node-resolver-mpjds\" (UID: \"9ee39668-c5e4-4da8-807d-a63d9591161c\") " pod="openshift-dns/node-resolver-mpjds" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.196910 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.196960 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.196978 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.196996 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:15 crc kubenswrapper[5014]: 
I0228 04:35:15.197011 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:15Z","lastTransitionTime":"2026-02-28T04:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.249276 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9ee39668-c5e4-4da8-807d-a63d9591161c-hosts-file\") pod \"node-resolver-mpjds\" (UID: \"9ee39668-c5e4-4da8-807d-a63d9591161c\") " pod="openshift-dns/node-resolver-mpjds" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.249328 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b79t\" (UniqueName: \"kubernetes.io/projected/9ee39668-c5e4-4da8-807d-a63d9591161c-kube-api-access-4b79t\") pod \"node-resolver-mpjds\" (UID: \"9ee39668-c5e4-4da8-807d-a63d9591161c\") " pod="openshift-dns/node-resolver-mpjds" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.249565 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9ee39668-c5e4-4da8-807d-a63d9591161c-hosts-file\") pod \"node-resolver-mpjds\" (UID: \"9ee39668-c5e4-4da8-807d-a63d9591161c\") " pod="openshift-dns/node-resolver-mpjds" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.268402 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b79t\" (UniqueName: \"kubernetes.io/projected/9ee39668-c5e4-4da8-807d-a63d9591161c-kube-api-access-4b79t\") pod \"node-resolver-mpjds\" (UID: \"9ee39668-c5e4-4da8-807d-a63d9591161c\") " pod="openshift-dns/node-resolver-mpjds" Feb 28 04:35:15 crc kubenswrapper[5014]: 
I0228 04:35:15.299158 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.299210 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.299226 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.299250 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.299269 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:15Z","lastTransitionTime":"2026-02-28T04:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.347450 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-mpjds" Feb 28 04:35:15 crc kubenswrapper[5014]: W0228 04:35:15.363493 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9ee39668_c5e4_4da8_807d_a63d9591161c.slice/crio-2f1cc4019519be9f4ade08c77ec78d9d831f971aad68e5b1b493bdc8b9b51fd3 WatchSource:0}: Error finding container 2f1cc4019519be9f4ade08c77ec78d9d831f971aad68e5b1b493bdc8b9b51fd3: Status 404 returned error can't find the container with id 2f1cc4019519be9f4ade08c77ec78d9d831f971aad68e5b1b493bdc8b9b51fd3 Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.379736 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-cct62"] Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.380141 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-cct62" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.384786 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-8xzmq"] Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.385250 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.385779 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.388099 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.388336 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.388537 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.389910 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.390675 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.391257 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.391384 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.392314 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.393323 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.393988 5014 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-multus/multus-additional-cni-plugins-lt2wh"] Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.397595 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.397995 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.399772 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.400215 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 28 
04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.405099 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.405132 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.405144 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.405159 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.405168 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:15Z","lastTransitionTime":"2026-02-28T04:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.409733 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.421583 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.436762 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"v
olumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"c
ert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.447348 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408
829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.450500 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fac15347-d258-4af3-85ab-04ee49634e0a-cnibin\") pod \"multus-additional-cni-plugins-lt2wh\" (UID: \"fac15347-d258-4af3-85ab-04ee49634e0a\") " pod="openshift-multus/multus-additional-cni-plugins-lt2wh" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.450549 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-host-run-netns\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.450568 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c297n\" (UniqueName: \"kubernetes.io/projected/fac15347-d258-4af3-85ab-04ee49634e0a-kube-api-access-c297n\") pod \"multus-additional-cni-plugins-lt2wh\" (UID: \"fac15347-d258-4af3-85ab-04ee49634e0a\") " 
pod="openshift-multus/multus-additional-cni-plugins-lt2wh" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.450587 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-host-run-k8s-cni-cncf-io\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.450601 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-host-var-lib-cni-multus\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.450626 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6aad0009-d904-48f8-8e30-82205907ece1-mcd-auth-proxy-config\") pod \"machine-config-daemon-cct62\" (UID: \"6aad0009-d904-48f8-8e30-82205907ece1\") " pod="openshift-machine-config-operator/machine-config-daemon-cct62" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.450640 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khzjr\" (UniqueName: \"kubernetes.io/projected/6aad0009-d904-48f8-8e30-82205907ece1-kube-api-access-khzjr\") pod \"machine-config-daemon-cct62\" (UID: \"6aad0009-d904-48f8-8e30-82205907ece1\") " pod="openshift-machine-config-operator/machine-config-daemon-cct62" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.450655 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-cnibin\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.450669 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6aad0009-d904-48f8-8e30-82205907ece1-rootfs\") pod \"machine-config-daemon-cct62\" (UID: \"6aad0009-d904-48f8-8e30-82205907ece1\") " pod="openshift-machine-config-operator/machine-config-daemon-cct62" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.450683 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-os-release\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.450701 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/08c35a73-dfa6-4097-beb4-3a6d4f419559-multus-daemon-config\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.450724 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-host-var-lib-cni-bin\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.450745 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-multus-cni-dir\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.450760 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fac15347-d258-4af3-85ab-04ee49634e0a-system-cni-dir\") pod \"multus-additional-cni-plugins-lt2wh\" (UID: \"fac15347-d258-4af3-85ab-04ee49634e0a\") " pod="openshift-multus/multus-additional-cni-plugins-lt2wh" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.450774 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-system-cni-dir\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.450788 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-etc-kubernetes\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.450803 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-host-run-multus-certs\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.450835 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/fac15347-d258-4af3-85ab-04ee49634e0a-os-release\") pod \"multus-additional-cni-plugins-lt2wh\" (UID: \"fac15347-d258-4af3-85ab-04ee49634e0a\") " pod="openshift-multus/multus-additional-cni-plugins-lt2wh" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.450853 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fac15347-d258-4af3-85ab-04ee49634e0a-cni-binary-copy\") pod \"multus-additional-cni-plugins-lt2wh\" (UID: \"fac15347-d258-4af3-85ab-04ee49634e0a\") " pod="openshift-multus/multus-additional-cni-plugins-lt2wh" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.450867 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6aad0009-d904-48f8-8e30-82205907ece1-proxy-tls\") pod \"machine-config-daemon-cct62\" (UID: \"6aad0009-d904-48f8-8e30-82205907ece1\") " pod="openshift-machine-config-operator/machine-config-daemon-cct62" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.450891 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-hostroot\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.450909 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-multus-conf-dir\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.450928 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-qphnm\" (UniqueName: \"kubernetes.io/projected/08c35a73-dfa6-4097-beb4-3a6d4f419559-kube-api-access-qphnm\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.450942 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fac15347-d258-4af3-85ab-04ee49634e0a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lt2wh\" (UID: \"fac15347-d258-4af3-85ab-04ee49634e0a\") " pod="openshift-multus/multus-additional-cni-plugins-lt2wh" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.450958 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fac15347-d258-4af3-85ab-04ee49634e0a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lt2wh\" (UID: \"fac15347-d258-4af3-85ab-04ee49634e0a\") " pod="openshift-multus/multus-additional-cni-plugins-lt2wh" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.450972 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/08c35a73-dfa6-4097-beb4-3a6d4f419559-cni-binary-copy\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.450988 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-multus-socket-dir-parent\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.451002 5014 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-host-var-lib-kubelet\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.456214 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.463438 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.471634 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.479572 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.487178 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.497692 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.505233 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.510330 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.510366 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.510378 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.510396 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.510406 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:15Z","lastTransitionTime":"2026-02-28T04:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.516945 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.538108 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a
67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"starte
dAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.545919 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.551476 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-host-run-netns\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.551523 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fac15347-d258-4af3-85ab-04ee49634e0a-cnibin\") pod \"multus-additional-cni-plugins-lt2wh\" (UID: \"fac15347-d258-4af3-85ab-04ee49634e0a\") " pod="openshift-multus/multus-additional-cni-plugins-lt2wh" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.551546 5014 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-host-var-lib-cni-multus\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.551570 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c297n\" (UniqueName: \"kubernetes.io/projected/fac15347-d258-4af3-85ab-04ee49634e0a-kube-api-access-c297n\") pod \"multus-additional-cni-plugins-lt2wh\" (UID: \"fac15347-d258-4af3-85ab-04ee49634e0a\") " pod="openshift-multus/multus-additional-cni-plugins-lt2wh" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.551591 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-host-run-k8s-cni-cncf-io\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.551621 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6aad0009-d904-48f8-8e30-82205907ece1-mcd-auth-proxy-config\") pod \"machine-config-daemon-cct62\" (UID: \"6aad0009-d904-48f8-8e30-82205907ece1\") " pod="openshift-machine-config-operator/machine-config-daemon-cct62" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.551644 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khzjr\" (UniqueName: \"kubernetes.io/projected/6aad0009-d904-48f8-8e30-82205907ece1-kube-api-access-khzjr\") pod \"machine-config-daemon-cct62\" (UID: \"6aad0009-d904-48f8-8e30-82205907ece1\") " pod="openshift-machine-config-operator/machine-config-daemon-cct62" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 
04:35:15.551689 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-cnibin\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.551712 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6aad0009-d904-48f8-8e30-82205907ece1-rootfs\") pod \"machine-config-daemon-cct62\" (UID: \"6aad0009-d904-48f8-8e30-82205907ece1\") " pod="openshift-machine-config-operator/machine-config-daemon-cct62" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.551731 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/08c35a73-dfa6-4097-beb4-3a6d4f419559-multus-daemon-config\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.551752 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-os-release\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.551771 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-host-var-lib-cni-bin\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.551832 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-multus-cni-dir\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.551859 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fac15347-d258-4af3-85ab-04ee49634e0a-system-cni-dir\") pod \"multus-additional-cni-plugins-lt2wh\" (UID: \"fac15347-d258-4af3-85ab-04ee49634e0a\") " pod="openshift-multus/multus-additional-cni-plugins-lt2wh" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.551881 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-system-cni-dir\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.551900 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-etc-kubernetes\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.551921 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6aad0009-d904-48f8-8e30-82205907ece1-proxy-tls\") pod \"machine-config-daemon-cct62\" (UID: \"6aad0009-d904-48f8-8e30-82205907ece1\") " pod="openshift-machine-config-operator/machine-config-daemon-cct62" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.551943 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-host-run-multus-certs\") pod 
\"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.551964 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fac15347-d258-4af3-85ab-04ee49634e0a-os-release\") pod \"multus-additional-cni-plugins-lt2wh\" (UID: \"fac15347-d258-4af3-85ab-04ee49634e0a\") " pod="openshift-multus/multus-additional-cni-plugins-lt2wh" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.551996 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fac15347-d258-4af3-85ab-04ee49634e0a-cni-binary-copy\") pod \"multus-additional-cni-plugins-lt2wh\" (UID: \"fac15347-d258-4af3-85ab-04ee49634e0a\") " pod="openshift-multus/multus-additional-cni-plugins-lt2wh" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.552027 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-host-var-lib-kubelet\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.552047 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-hostroot\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.552070 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-host-run-netns\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " 
pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.552070 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-multus-conf-dir\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.552120 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-multus-conf-dir\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.552148 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qphnm\" (UniqueName: \"kubernetes.io/projected/08c35a73-dfa6-4097-beb4-3a6d4f419559-kube-api-access-qphnm\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.552170 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fac15347-d258-4af3-85ab-04ee49634e0a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lt2wh\" (UID: \"fac15347-d258-4af3-85ab-04ee49634e0a\") " pod="openshift-multus/multus-additional-cni-plugins-lt2wh" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.552191 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fac15347-d258-4af3-85ab-04ee49634e0a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lt2wh\" (UID: \"fac15347-d258-4af3-85ab-04ee49634e0a\") " pod="openshift-multus/multus-additional-cni-plugins-lt2wh" Feb 28 04:35:15 crc 
kubenswrapper[5014]: I0228 04:35:15.552210 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/08c35a73-dfa6-4097-beb4-3a6d4f419559-cni-binary-copy\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.552228 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-multus-socket-dir-parent\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.552309 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-multus-socket-dir-parent\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.552334 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fac15347-d258-4af3-85ab-04ee49634e0a-cnibin\") pod \"multus-additional-cni-plugins-lt2wh\" (UID: \"fac15347-d258-4af3-85ab-04ee49634e0a\") " pod="openshift-multus/multus-additional-cni-plugins-lt2wh" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.552358 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-host-var-lib-cni-multus\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.552575 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-os-release\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.552652 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-host-run-k8s-cni-cncf-io\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.552697 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-host-var-lib-cni-bin\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.552973 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/08c35a73-dfa6-4097-beb4-3a6d4f419559-multus-daemon-config\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.553008 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-multus-cni-dir\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.553070 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fac15347-d258-4af3-85ab-04ee49634e0a-system-cni-dir\") pod \"multus-additional-cni-plugins-lt2wh\" (UID: 
\"fac15347-d258-4af3-85ab-04ee49634e0a\") " pod="openshift-multus/multus-additional-cni-plugins-lt2wh" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.553144 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-system-cni-dir\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.553252 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6aad0009-d904-48f8-8e30-82205907ece1-mcd-auth-proxy-config\") pod \"machine-config-daemon-cct62\" (UID: \"6aad0009-d904-48f8-8e30-82205907ece1\") " pod="openshift-machine-config-operator/machine-config-daemon-cct62" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.552034 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6aad0009-d904-48f8-8e30-82205907ece1-rootfs\") pod \"machine-config-daemon-cct62\" (UID: \"6aad0009-d904-48f8-8e30-82205907ece1\") " pod="openshift-machine-config-operator/machine-config-daemon-cct62" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.553105 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fac15347-d258-4af3-85ab-04ee49634e0a-os-release\") pod \"multus-additional-cni-plugins-lt2wh\" (UID: \"fac15347-d258-4af3-85ab-04ee49634e0a\") " pod="openshift-multus/multus-additional-cni-plugins-lt2wh" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.553546 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-host-run-multus-certs\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " 
pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.553638 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-host-var-lib-kubelet\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.553666 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-hostroot\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.553720 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-etc-kubernetes\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.553719 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/08c35a73-dfa6-4097-beb4-3a6d4f419559-cnibin\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.554046 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fac15347-d258-4af3-85ab-04ee49634e0a-tuning-conf-dir\") pod \"multus-additional-cni-plugins-lt2wh\" (UID: \"fac15347-d258-4af3-85ab-04ee49634e0a\") " pod="openshift-multus/multus-additional-cni-plugins-lt2wh" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.555014 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fac15347-d258-4af3-85ab-04ee49634e0a-cni-binary-copy\") pod \"multus-additional-cni-plugins-lt2wh\" (UID: \"fac15347-d258-4af3-85ab-04ee49634e0a\") " pod="openshift-multus/multus-additional-cni-plugins-lt2wh" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.555150 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/08c35a73-dfa6-4097-beb4-3a6d4f419559-cni-binary-copy\") pod \"multus-8xzmq\" (UID: \"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.555498 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/fac15347-d258-4af3-85ab-04ee49634e0a-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-lt2wh\" (UID: \"fac15347-d258-4af3-85ab-04ee49634e0a\") " pod="openshift-multus/multus-additional-cni-plugins-lt2wh" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.557213 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.558431 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6aad0009-d904-48f8-8e30-82205907ece1-proxy-tls\") pod \"machine-config-daemon-cct62\" (UID: \"6aad0009-d904-48f8-8e30-82205907ece1\") " pod="openshift-machine-config-operator/machine-config-daemon-cct62" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.567202 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.570828 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c297n\" (UniqueName: \"kubernetes.io/projected/fac15347-d258-4af3-85ab-04ee49634e0a-kube-api-access-c297n\") pod \"multus-additional-cni-plugins-lt2wh\" (UID: \"fac15347-d258-4af3-85ab-04ee49634e0a\") " pod="openshift-multus/multus-additional-cni-plugins-lt2wh" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.571180 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khzjr\" (UniqueName: \"kubernetes.io/projected/6aad0009-d904-48f8-8e30-82205907ece1-kube-api-access-khzjr\") pod \"machine-config-daemon-cct62\" (UID: \"6aad0009-d904-48f8-8e30-82205907ece1\") " pod="openshift-machine-config-operator/machine-config-daemon-cct62" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.582208 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qphnm\" (UniqueName: \"kubernetes.io/projected/08c35a73-dfa6-4097-beb4-3a6d4f419559-kube-api-access-qphnm\") pod \"multus-8xzmq\" (UID: 
\"08c35a73-dfa6-4097-beb4-3a6d4f419559\") " pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.582900 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.595006 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.603951 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322"} Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.604014 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12"} Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.605455 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mpjds" event={"ID":"9ee39668-c5e4-4da8-807d-a63d9591161c","Type":"ContainerStarted","Data":"572758a432519d3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689"} Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.605502 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mpjds" event={"ID":"9ee39668-c5e4-4da8-807d-a63d9591161c","Type":"ContainerStarted","Data":"2f1cc4019519be9f4ade08c77ec78d9d831f971aad68e5b1b493bdc8b9b51fd3"} Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.609447 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.612854 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.613066 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.613170 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.613280 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.613392 5014 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:15Z","lastTransitionTime":"2026-02-28T04:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.623160 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.636987 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.649703 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.658597 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.668831 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.678538 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.690101 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.705400 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-cct62" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.714074 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847
b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee
96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.716707 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.716744 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.716754 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.716772 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.716785 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:15Z","lastTransitionTime":"2026-02-28T04:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.716972 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-8xzmq" Feb 28 04:35:15 crc kubenswrapper[5014]: W0228 04:35:15.719014 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6aad0009_d904_48f8_8e30_82205907ece1.slice/crio-b51401d4b2987e7a5c2d92be607840744a91e675832ef4faf10f9466ea05d6d3 WatchSource:0}: Error finding container b51401d4b2987e7a5c2d92be607840744a91e675832ef4faf10f9466ea05d6d3: Status 404 returned error can't find the container with id b51401d4b2987e7a5c2d92be607840744a91e675832ef4faf10f9466ea05d6d3 Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.724597 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.725546 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",
\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.739075 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: W0228 04:35:15.739982 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08c35a73_dfa6_4097_beb4_3a6d4f419559.slice/crio-4b010d4831aa17e7c47a1064a5caf11090a67ab67711cd8604d68b87aefb4ccc WatchSource:0}: Error finding container 4b010d4831aa17e7c47a1064a5caf11090a67ab67711cd8604d68b87aefb4ccc: Status 404 returned error can't find the container with id 4b010d4831aa17e7c47a1064a5caf11090a67ab67711cd8604d68b87aefb4ccc Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.751124 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.758921 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-62hnq"] Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.759723 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.763504 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.763613 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.763860 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.764027 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.764080 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.764207 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.764291 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.768427 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.785632 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.803703 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"contain
erID\\\":\\\"cri-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.820173 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.821284 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.821326 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.821337 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.821357 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.821370 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:15Z","lastTransitionTime":"2026-02-28T04:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.834343 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.843387 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.854829 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.855140 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-slash\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.855188 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp7g9\" (UniqueName: \"kubernetes.io/projected/faa5db1f-df50-492a-9d45-d5065bdc63d2-kube-api-access-vp7g9\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.855232 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-var-lib-openvswitch\") pod \"ovnkube-node-62hnq\" (UID: 
\"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.855314 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-etc-openvswitch\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.855336 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-run-ovn\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.855352 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-node-log\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.855402 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.855431 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/faa5db1f-df50-492a-9d45-d5065bdc63d2-ovnkube-config\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.855480 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/faa5db1f-df50-492a-9d45-d5065bdc63d2-ovn-node-metrics-cert\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.855511 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-systemd-units\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.855549 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-run-netns\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.855572 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-cni-bin\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.855599 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/faa5db1f-df50-492a-9d45-d5065bdc63d2-env-overrides\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.855635 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-log-socket\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.855650 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-run-ovn-kubernetes\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.855664 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/faa5db1f-df50-492a-9d45-d5065bdc63d2-ovnkube-script-lib\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.855697 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-run-systemd\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.855713 5014 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-run-openvswitch\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.855742 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-cni-netd\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.855775 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-kubelet\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.866257 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.878515 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.897667 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a
67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"starte
dAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIP
s\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.906262 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.918396 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.923293 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.923331 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.923340 5014 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.923356 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.923366 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:15Z","lastTransitionTime":"2026-02-28T04:35:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.931012 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.942552 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.956356 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:35:15 crc kubenswrapper[5014]: E0228 04:35:15.956528 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:35:31.956497179 +0000 UTC m=+120.626623089 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.956567 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/faa5db1f-df50-492a-9d45-d5065bdc63d2-ovn-node-metrics-cert\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.956605 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.956659 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-systemd-units\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.956688 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-run-netns\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.956712 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-cni-bin\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.956738 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/faa5db1f-df50-492a-9d45-d5065bdc63d2-env-overrides\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.956769 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-run-ovn-kubernetes\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.956796 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/faa5db1f-df50-492a-9d45-d5065bdc63d2-ovnkube-script-lib\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.956842 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-run-systemd\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 
crc kubenswrapper[5014]: I0228 04:35:15.956869 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-log-socket\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.956891 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-run-openvswitch\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: E0228 04:35:15.956975 5014 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.956965 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-run-netns\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.956978 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-systemd-units\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.957015 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-cni-bin\") pod 
\"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.957030 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-run-ovn-kubernetes\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.957039 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-run-systemd\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.957064 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-run-openvswitch\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.957143 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-log-socket\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: E0228 04:35:15.957782 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-28 04:35:31.957751453 +0000 UTC m=+120.627877363 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.958080 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/faa5db1f-df50-492a-9d45-d5065bdc63d2-ovnkube-script-lib\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.958087 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/faa5db1f-df50-492a-9d45-d5065bdc63d2-env-overrides\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.958854 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-cni-netd\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.959044 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-kubelet\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 
04:35:15.959349 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-slash\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.959484 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp7g9\" (UniqueName: \"kubernetes.io/projected/faa5db1f-df50-492a-9d45-d5065bdc63d2-kube-api-access-vp7g9\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.959610 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.960053 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-var-lib-openvswitch\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.960142 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-var-lib-openvswitch\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc 
kubenswrapper[5014]: I0228 04:35:15.959143 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-kubelet\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.959269 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\
"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: E0228 04:35:15.959733 5014 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.960195 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-etc-openvswitch\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.960153 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-etc-openvswitch\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.959453 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-slash\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.959004 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-cni-netd\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.960852 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-run-ovn\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: E0228 04:35:15.961014 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-28 04:35:31.960997542 +0000 UTC m=+120.631123442 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.960640 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-run-ovn\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.963035 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-node-log\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.963165 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.963262 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/faa5db1f-df50-492a-9d45-d5065bdc63d2-ovnkube-config\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.964363 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-node-log\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.964425 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.971400 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/faa5db1f-df50-492a-9d45-d5065bdc63d2-ovnkube-config\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.972375 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408
829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.983283 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.991074 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp7g9\" (UniqueName: \"kubernetes.io/projected/faa5db1f-df50-492a-9d45-d5065bdc63d2-kube-api-access-vp7g9\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.994119 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/faa5db1f-df50-492a-9d45-d5065bdc63d2-ovn-node-metrics-cert\") pod \"ovnkube-node-62hnq\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:15 crc kubenswrapper[5014]: I0228 04:35:15.999214 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.012744 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.022633 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.030398 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.030433 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.030442 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.030457 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 
04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.030469 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:16Z","lastTransitionTime":"2026-02-28T04:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.065117 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.065191 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:35:16 crc kubenswrapper[5014]: E0228 04:35:16.065360 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 04:35:16 crc kubenswrapper[5014]: E0228 04:35:16.065377 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 04:35:16 crc kubenswrapper[5014]: E0228 04:35:16.065391 5014 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod 
openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 04:35:16 crc kubenswrapper[5014]: E0228 04:35:16.065414 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 04:35:16 crc kubenswrapper[5014]: E0228 04:35:16.065474 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 04:35:16 crc kubenswrapper[5014]: E0228 04:35:16.065533 5014 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 04:35:16 crc kubenswrapper[5014]: E0228 04:35:16.065455 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-28 04:35:32.065437301 +0000 UTC m=+120.735563211 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 04:35:16 crc kubenswrapper[5014]: E0228 04:35:16.065629 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-28 04:35:32.065606895 +0000 UTC m=+120.735732805 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.089023 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.134309 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.134351 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.134362 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.134381 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.134394 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:16Z","lastTransitionTime":"2026-02-28T04:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.171626 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.171709 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.171750 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:35:16 crc kubenswrapper[5014]: E0228 04:35:16.171884 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:35:16 crc kubenswrapper[5014]: E0228 04:35:16.172032 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:35:16 crc kubenswrapper[5014]: E0228 04:35:16.172205 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.236976 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.237017 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.237029 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.237045 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.237056 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:16Z","lastTransitionTime":"2026-02-28T04:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.340484 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.340540 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.340552 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.340572 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.340586 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:16Z","lastTransitionTime":"2026-02-28T04:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.444691 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.444737 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.444747 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.444765 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.444776 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:16Z","lastTransitionTime":"2026-02-28T04:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.547343 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.547414 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.547429 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.547447 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.547480 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:16Z","lastTransitionTime":"2026-02-28T04:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.609249 5014 generic.go:334] "Generic (PLEG): container finished" podID="fac15347-d258-4af3-85ab-04ee49634e0a" containerID="ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9" exitCode=0 Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.609319 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" event={"ID":"fac15347-d258-4af3-85ab-04ee49634e0a","Type":"ContainerDied","Data":"ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9"} Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.609511 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" event={"ID":"fac15347-d258-4af3-85ab-04ee49634e0a","Type":"ContainerStarted","Data":"8d34e4e8f534ce85afb4b5784c466f3fe795fafa5995e4e80ee7c70b55e14392"} Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.610342 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8xzmq" event={"ID":"08c35a73-dfa6-4097-beb4-3a6d4f419559","Type":"ContainerStarted","Data":"591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c"} Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.610375 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8xzmq" event={"ID":"08c35a73-dfa6-4097-beb4-3a6d4f419559","Type":"ContainerStarted","Data":"4b010d4831aa17e7c47a1064a5caf11090a67ab67711cd8604d68b87aefb4ccc"} Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.613801 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" event={"ID":"6aad0009-d904-48f8-8e30-82205907ece1","Type":"ContainerStarted","Data":"0f40dbe122a22253898d35b10f0744955b906a164a96294c156ff7a092ca1a60"} Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.613851 5014 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" event={"ID":"6aad0009-d904-48f8-8e30-82205907ece1","Type":"ContainerStarted","Data":"40cbccd31c912a9ab6a9e8637016dc40a6dfd40522302f9192f50bd3b860a550"} Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.613864 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" event={"ID":"6aad0009-d904-48f8-8e30-82205907ece1","Type":"ContainerStarted","Data":"b51401d4b2987e7a5c2d92be607840744a91e675832ef4faf10f9466ea05d6d3"} Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.615353 5014 generic.go:334] "Generic (PLEG): container finished" podID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerID="d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0" exitCode=0 Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.615439 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" event={"ID":"faa5db1f-df50-492a-9d45-d5065bdc63d2","Type":"ContainerDied","Data":"d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0"} Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.615509 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" event={"ID":"faa5db1f-df50-492a-9d45-d5065bdc63d2","Type":"ContainerStarted","Data":"0c1b6d6f056d4cfc0fc21116705d96feffb9e30ec1e9a6383f4adcb16d2de01a"} Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.623423 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:16Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.632986 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:16Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.644983 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:16Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.649889 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.649956 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.649973 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.649995 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.650017 5014 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:16Z","lastTransitionTime":"2026-02-28T04:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.667249 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":
\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:16Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.686975 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:16Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.698535 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:16Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.710396 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:16Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.723401 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:16Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.735742 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:16Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.751951 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408
829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:16Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.763139 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:16Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.768105 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.768134 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.768143 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.768157 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.768166 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:16Z","lastTransitionTime":"2026-02-28T04:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.779653 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:16Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.793645 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:16Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.802618 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:16Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.813604 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:16Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.824348 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572758a432519d3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:16Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.843191 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:16Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.854131 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:16Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.869721 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:16Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.871177 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.871218 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.871228 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.871244 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.871254 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:16Z","lastTransitionTime":"2026-02-28T04:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.879011 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f40dbe122a22253898d35b10f0744955b906a164a96294c156ff7a092ca1a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd405
22302f9192f50bd3b860a550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:16Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.889957 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:16Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.901153 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:16Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.941929 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:16Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.973431 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.973463 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.973470 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.973483 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.973492 5014 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:16Z","lastTransitionTime":"2026-02-28T04:35:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:16 crc kubenswrapper[5014]: I0228 04:35:16.977127 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:16Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.019632 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:17Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.068391 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408
829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:17Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.076358 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.076392 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.076400 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.076418 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.076430 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:17Z","lastTransitionTime":"2026-02-28T04:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.108318 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\
":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-28T04:35:17Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.167985 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:17Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.172018 5014 scope.go:117] "RemoveContainer" containerID="acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24" Feb 28 04:35:17 crc kubenswrapper[5014]: E0228 04:35:17.172246 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.180323 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.180372 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.180384 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.180404 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.180418 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:17Z","lastTransitionTime":"2026-02-28T04:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.282996 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.283041 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.283053 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.283068 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.283077 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:17Z","lastTransitionTime":"2026-02-28T04:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.385352 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.385416 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.385430 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.385450 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.385464 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:17Z","lastTransitionTime":"2026-02-28T04:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.488027 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.488062 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.488071 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.488085 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.488093 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:17Z","lastTransitionTime":"2026-02-28T04:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.591144 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.591557 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.591566 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.591581 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.591591 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:17Z","lastTransitionTime":"2026-02-28T04:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.627355 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" event={"ID":"faa5db1f-df50-492a-9d45-d5065bdc63d2","Type":"ContainerStarted","Data":"2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63"} Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.627402 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" event={"ID":"faa5db1f-df50-492a-9d45-d5065bdc63d2","Type":"ContainerStarted","Data":"b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013"} Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.627411 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" event={"ID":"faa5db1f-df50-492a-9d45-d5065bdc63d2","Type":"ContainerStarted","Data":"6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388"} Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.627419 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" event={"ID":"faa5db1f-df50-492a-9d45-d5065bdc63d2","Type":"ContainerStarted","Data":"6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c"} Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.628885 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" event={"ID":"fac15347-d258-4af3-85ab-04ee49634e0a","Type":"ContainerStarted","Data":"6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10"} Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.642966 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:17Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.655189 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572758a432519d3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:17Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.668071 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:17Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.682026 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f40dbe122a22253898d35b10f0744955b906a164a96294c156ff7a092ca1a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd405
22302f9192f50bd3b860a550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:17Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.694050 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.694081 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.694091 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:17 crc 
kubenswrapper[5014]: I0228 04:35:17.694104 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.694118 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:17Z","lastTransitionTime":"2026-02-28T04:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.696709 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"st
arted\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\
\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:17Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.715346 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:17Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.725702 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-28T04:35:17Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.739075 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:17Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.752020 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:17Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.766705 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:17Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.781786 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:17Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.796790 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.796845 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.796857 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.796872 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.796883 5014 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:17Z","lastTransitionTime":"2026-02-28T04:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.798513 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:17Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.811871 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[
{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:17Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.831576 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics 
northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\
"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"
state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v
p7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswi
tch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:17Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.898884 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.898922 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.898933 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.898953 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:17 crc kubenswrapper[5014]: I0228 04:35:17.898964 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:17Z","lastTransitionTime":"2026-02-28T04:35:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.001914 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.001973 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.001986 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.002004 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.002017 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:18Z","lastTransitionTime":"2026-02-28T04:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.104241 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.104270 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.104280 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.104293 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.104302 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:18Z","lastTransitionTime":"2026-02-28T04:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.171019 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:18 crc kubenswrapper[5014]: E0228 04:35:18.171137 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.171477 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:35:18 crc kubenswrapper[5014]: E0228 04:35:18.171526 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.171756 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:35:18 crc kubenswrapper[5014]: E0228 04:35:18.171890 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.207498 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.207927 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.207936 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.207951 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.207961 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:18Z","lastTransitionTime":"2026-02-28T04:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.311652 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.311691 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.311702 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.311716 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.311727 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:18Z","lastTransitionTime":"2026-02-28T04:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.414432 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.414476 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.414484 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.414500 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.414508 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:18Z","lastTransitionTime":"2026-02-28T04:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.517071 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.517393 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.517479 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.517541 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.517606 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:18Z","lastTransitionTime":"2026-02-28T04:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.620147 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.620414 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.620510 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.620585 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.620641 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:18Z","lastTransitionTime":"2026-02-28T04:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.633597 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"b8709dcb48a2e6302ad67942cb60afcb91050a57a792ea2439f62a08dda2f396"} Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.636030 5014 generic.go:334] "Generic (PLEG): container finished" podID="fac15347-d258-4af3-85ab-04ee49634e0a" containerID="6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10" exitCode=0 Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.636105 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" event={"ID":"fac15347-d258-4af3-85ab-04ee49634e0a","Type":"ContainerDied","Data":"6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10"} Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.645596 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" event={"ID":"faa5db1f-df50-492a-9d45-d5065bdc63d2","Type":"ContainerStarted","Data":"3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331"} Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.645644 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" event={"ID":"faa5db1f-df50-492a-9d45-d5065bdc63d2","Type":"ContainerStarted","Data":"4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c"} Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.659496 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:18Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.677277 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:18Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.695030 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:18Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.711985 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:18Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.723108 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.723145 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.723155 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.723172 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.723182 5014 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:18Z","lastTransitionTime":"2026-02-28T04:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.726845 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:18Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.741197 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[
{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:18Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.759910 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics 
northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\
"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"
state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v
p7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswi
tch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:18Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.771551 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8709dcb48a2e6302ad67942cb60afcb91050a57a792ea2439f62a08dda2f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:18Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.781323 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572758a432519d3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:18Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.792638 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f40dbe122a22253898d35b10f0744955b906a164a96294c156ff7a092ca1a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793
426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd40522302f9192f50bd3b860a550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:18Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.806497 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:18Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.825447 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.825491 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.825502 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.825519 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 
04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.825531 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:18Z","lastTransitionTime":"2026-02-28T04:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.826290 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\
\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerSt
atuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:18Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.836691 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:18Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.848243 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:18Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.860226 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8709dcb48a2e6302ad67942cb60afcb91050a57a792ea2439f62a08dda2f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T04:35:18Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.873156 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572758a432519d3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:18Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.892236 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e7790
36cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6
d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:18Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.901113 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:18Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.910880 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:18Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.919365 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f40dbe122a22253898d35b10f0744955b906a164a96294c156ff7a092ca1a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd405
22302f9192f50bd3b860a550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:18Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.928351 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.928393 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.928403 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:18 crc 
kubenswrapper[5014]: I0228 04:35:18.928420 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.928431 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:18Z","lastTransitionTime":"2026-02-28T04:35:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.932716 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"re
ason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:18Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.944574 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:18Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.958137 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:18Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.971143 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:18Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:18 crc kubenswrapper[5014]: I0228 04:35:18.985458 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:18Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:18.999939 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408
829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:18Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.011798 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:19Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.028149 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:19Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.030603 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.030648 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.030663 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.030685 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.030697 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:19Z","lastTransitionTime":"2026-02-28T04:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.133347 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.133381 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.133393 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.133409 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.133421 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:19Z","lastTransitionTime":"2026-02-28T04:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.235601 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.235649 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.235660 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.235678 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.235689 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:19Z","lastTransitionTime":"2026-02-28T04:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.338349 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.338409 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.338421 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.338443 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.338457 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:19Z","lastTransitionTime":"2026-02-28T04:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.440550 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.440585 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.440594 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.440609 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.440618 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:19Z","lastTransitionTime":"2026-02-28T04:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.542779 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.542853 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.542864 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.542877 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.542886 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:19Z","lastTransitionTime":"2026-02-28T04:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.647112 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.647180 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.647193 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.647213 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.647228 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:19Z","lastTransitionTime":"2026-02-28T04:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.652530 5014 generic.go:334] "Generic (PLEG): container finished" podID="fac15347-d258-4af3-85ab-04ee49634e0a" containerID="9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2" exitCode=0 Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.652656 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" event={"ID":"fac15347-d258-4af3-85ab-04ee49634e0a","Type":"ContainerDied","Data":"9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2"} Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.670725 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8709dcb48a2e6302ad67942cb60afcb91050a57a792ea2439f62a08dda2f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:19Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.683303 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572758a432519d3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:19Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.695157 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:19Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.709074 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f40dbe122a22253898d35b10f0744955b906a164a96294c156ff7a092ca1a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd405
22302f9192f50bd3b860a550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:19Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.725837 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:19Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 
04:35:19.746036 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93
\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernet
es/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:19Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.749755 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.749847 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.749872 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.749903 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.749925 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:19Z","lastTransitionTime":"2026-02-28T04:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.756626 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:19Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.767915 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:19Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.780456 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:19Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.792084 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:19Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.803641 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:19Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.815209 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408
829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:19Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.827188 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:19Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.844333 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:19Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.851932 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.851965 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.851973 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.851988 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.851996 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:19Z","lastTransitionTime":"2026-02-28T04:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.954235 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.954275 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.954285 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.954298 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:19 crc kubenswrapper[5014]: I0228 04:35:19.954309 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:19Z","lastTransitionTime":"2026-02-28T04:35:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.056952 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.056994 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.057007 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.057025 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.057037 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:20Z","lastTransitionTime":"2026-02-28T04:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.159685 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.159737 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.159752 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.159770 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.159787 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:20Z","lastTransitionTime":"2026-02-28T04:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.171264 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.171348 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.171453 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:20 crc kubenswrapper[5014]: E0228 04:35:20.171447 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:35:20 crc kubenswrapper[5014]: E0228 04:35:20.171582 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:35:20 crc kubenswrapper[5014]: E0228 04:35:20.171711 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.262692 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.262741 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.262752 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.262771 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.262783 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:20Z","lastTransitionTime":"2026-02-28T04:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.364826 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.364866 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.364879 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.364896 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.364913 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:20Z","lastTransitionTime":"2026-02-28T04:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.467348 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.467415 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.467426 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.467442 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.467454 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:20Z","lastTransitionTime":"2026-02-28T04:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.569694 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.569741 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.569758 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.569786 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.569797 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:20Z","lastTransitionTime":"2026-02-28T04:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.658234 5014 generic.go:334] "Generic (PLEG): container finished" podID="fac15347-d258-4af3-85ab-04ee49634e0a" containerID="030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca" exitCode=0 Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.658347 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" event={"ID":"fac15347-d258-4af3-85ab-04ee49634e0a","Type":"ContainerDied","Data":"030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca"} Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.663287 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" event={"ID":"faa5db1f-df50-492a-9d45-d5065bdc63d2","Type":"ContainerStarted","Data":"01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56"} Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.673379 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8709dcb48a2e6302ad67942cb60afcb91050a57a792ea2439f62a08dda2f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T04:35:20Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.679259 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.681019 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.681039 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.681141 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.681159 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:20Z","lastTransitionTime":"2026-02-28T04:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.695966 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572758a432519d3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:20Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.710931 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:20Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.721984 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f40dbe122a22253898d35b10f0744955b906a164a96294c156ff7a092ca1a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd405
22302f9192f50bd3b860a550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:20Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.737245 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:20Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.753348 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
6-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-r
esources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:20Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.762219 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:20Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.773383 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:20Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.786235 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:20Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.786262 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.786424 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.786436 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.786456 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.786471 5014 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:20Z","lastTransitionTime":"2026-02-28T04:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.800544 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:20Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.813470 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:20Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.829393 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408
829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:20Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.843001 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:20Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.864289 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:20Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.889251 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.889303 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.889316 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.889335 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.889347 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:20Z","lastTransitionTime":"2026-02-28T04:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.991546 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.991597 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.991608 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.991625 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:20 crc kubenswrapper[5014]: I0228 04:35:20.991639 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:20Z","lastTransitionTime":"2026-02-28T04:35:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.094060 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.094090 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.094098 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.094112 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.094120 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:21Z","lastTransitionTime":"2026-02-28T04:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.198103 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.198177 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.198196 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.198638 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.198698 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:21Z","lastTransitionTime":"2026-02-28T04:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.301681 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.301719 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.301728 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.301743 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.301752 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:21Z","lastTransitionTime":"2026-02-28T04:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.404092 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.404124 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.404132 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.404145 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.404154 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:21Z","lastTransitionTime":"2026-02-28T04:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.507348 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.507398 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.507413 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.507434 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.507445 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:21Z","lastTransitionTime":"2026-02-28T04:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.609733 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.609775 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.609787 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.609820 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.609848 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:21Z","lastTransitionTime":"2026-02-28T04:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.640905 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-kqnsx"] Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.641391 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-kqnsx" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.643161 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.643382 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.643411 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.643647 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.657935 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8709dcb48a2e6302ad67942cb60afcb91050a57a792ea2439f62a08dda2f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T04:35:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.668949 5014 generic.go:334] "Generic (PLEG): container finished" podID="fac15347-d258-4af3-85ab-04ee49634e0a" containerID="3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042" exitCode=0 Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.669005 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" event={"ID":"fac15347-d258-4af3-85ab-04ee49634e0a","Type":"ContainerDied","Data":"3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042"} Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.673632 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572758a432519d3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.695352 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.706919 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-28T04:35:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.712220 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.712248 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.712258 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.712272 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.712281 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:21Z","lastTransitionTime":"2026-02-28T04:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.721061 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.732771 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f40dbe122a22253898d35b10f0744955b906a164a96294c156ff7a092ca1a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd405
22302f9192f50bd3b860a550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.748112 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.751232 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4nws\" (UniqueName: \"kubernetes.io/projected/71e74c76-dc4a-4ab9-a25a-0e925a384492-kube-api-access-p4nws\") pod \"node-ca-kqnsx\" (UID: \"71e74c76-dc4a-4ab9-a25a-0e925a384492\") " pod="openshift-image-registry/node-ca-kqnsx" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.751276 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/71e74c76-dc4a-4ab9-a25a-0e925a384492-serviceca\") pod \"node-ca-kqnsx\" (UID: \"71e74c76-dc4a-4ab9-a25a-0e925a384492\") " pod="openshift-image-registry/node-ca-kqnsx" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.751315 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/71e74c76-dc4a-4ab9-a25a-0e925a384492-host\") pod \"node-ca-kqnsx\" (UID: \"71e74c76-dc4a-4ab9-a25a-0e925a384492\") " pod="openshift-image-registry/node-ca-kqnsx" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.757746 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kqnsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e74c76-dc4a-4ab9-a25a-0e925a384492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4nws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kqnsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.770960 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.786990 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.799583 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.810856 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.814364 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.814418 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.814427 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.814440 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.814451 5014 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:21Z","lastTransitionTime":"2026-02-28T04:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.823232 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.836843 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[
{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.852885 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4nws\" (UniqueName: \"kubernetes.io/projected/71e74c76-dc4a-4ab9-a25a-0e925a384492-kube-api-access-p4nws\") pod \"node-ca-kqnsx\" (UID: \"71e74c76-dc4a-4ab9-a25a-0e925a384492\") " pod="openshift-image-registry/node-ca-kqnsx" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.852945 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/71e74c76-dc4a-4ab9-a25a-0e925a384492-serviceca\") pod \"node-ca-kqnsx\" (UID: \"71e74c76-dc4a-4ab9-a25a-0e925a384492\") " pod="openshift-image-registry/node-ca-kqnsx" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.852989 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/71e74c76-dc4a-4ab9-a25a-0e925a384492-host\") pod \"node-ca-kqnsx\" (UID: \"71e74c76-dc4a-4ab9-a25a-0e925a384492\") " pod="openshift-image-registry/node-ca-kqnsx" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.853050 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/71e74c76-dc4a-4ab9-a25a-0e925a384492-host\") pod \"node-ca-kqnsx\" (UID: \"71e74c76-dc4a-4ab9-a25a-0e925a384492\") " 
pod="openshift-image-registry/node-ca-kqnsx" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.854252 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.854441 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/71e74c76-dc4a-4ab9-a25a-0e925a384492-serviceca\") pod \"node-ca-kqnsx\" (UID: \"71e74c76-dc4a-4ab9-a25a-0e925a384492\") " pod="openshift-image-registry/node-ca-kqnsx" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.865540 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8709dcb48a2e6302ad67942cb60afcb91050a57a792ea2439f62a08dda2f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T04:35:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.874532 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4nws\" (UniqueName: \"kubernetes.io/projected/71e74c76-dc4a-4ab9-a25a-0e925a384492-kube-api-access-p4nws\") pod \"node-ca-kqnsx\" (UID: \"71e74c76-dc4a-4ab9-a25a-0e925a384492\") " pod="openshift-image-registry/node-ca-kqnsx" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.880199 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572758a432519d3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2b
b72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.891828 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.902322 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f40dbe122a22253898d35b10f0744955b906a164a96294c156ff7a092ca1a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd405
22302f9192f50bd3b860a550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.916351 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.918872 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.918914 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.918929 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.918951 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.918967 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:21Z","lastTransitionTime":"2026-02-28T04:35:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.926693 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kqnsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e74c76-dc4a-4ab9-a25a-0e925a384492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4nws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kqnsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.950715 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.954103 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-kqnsx" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.963821 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:21 crc kubenswrapper[5014]: W0228 04:35:21.973310 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71e74c76_dc4a_4ab9_a25a_0e925a384492.slice/crio-85a068a22e4dae80fa071c15459730d59bc27c756de8850bc22ebc6a436236b8 WatchSource:0}: Error finding container 85a068a22e4dae80fa071c15459730d59bc27c756de8850bc22ebc6a436236b8: Status 404 returned error can't find the container with id 85a068a22e4dae80fa071c15459730d59bc27c756de8850bc22ebc6a436236b8 Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.981773 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:21 crc kubenswrapper[5014]: I0228 04:35:21.992829 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.006441 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.022087 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.023724 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.023763 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.023775 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.023790 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.023800 5014 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:22Z","lastTransitionTime":"2026-02-28T04:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.034872 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.049042 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[
{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.068079 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics 
northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\
"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"
state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v
p7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswi
tch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.126199 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.126239 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.126249 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.126262 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.126271 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:22Z","lastTransitionTime":"2026-02-28T04:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.170993 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.170999 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:22 crc kubenswrapper[5014]: E0228 04:35:22.171156 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.171022 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:35:22 crc kubenswrapper[5014]: E0228 04:35:22.171456 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:35:22 crc kubenswrapper[5014]: E0228 04:35:22.171259 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.186192 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":
\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.200513 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.214338 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.227587 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.230551 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.230598 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.230614 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.230634 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.230646 5014 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:22Z","lastTransitionTime":"2026-02-28T04:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.239544 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.251574 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[
{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.269539 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics 
northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\
"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"
state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v
p7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswi
tch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.283792 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8709dcb48a2e6302ad67942cb60afcb91050a57a792ea2439f62a08dda2f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.294658 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572758a432519d3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.319593 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac
-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.343610 5014 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.343659 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.343672 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.343689 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.343702 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:22Z","lastTransitionTime":"2026-02-28T04:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.347332 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.366376 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f40dbe122a22253898d35b10f0744955b906a164a96294c156ff7a092ca1a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd405
22302f9192f50bd3b860a550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.387709 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.398346 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kqnsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e74c76-dc4a-4ab9-a25a-0e925a384492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4nws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kqnsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.417596 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.446354 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.446391 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.446400 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.446413 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.446422 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:22Z","lastTransitionTime":"2026-02-28T04:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.550076 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.550129 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.550140 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.550158 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.550172 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:22Z","lastTransitionTime":"2026-02-28T04:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.653146 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.653655 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.653667 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.653682 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.653694 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:22Z","lastTransitionTime":"2026-02-28T04:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.674708 5014 generic.go:334] "Generic (PLEG): container finished" podID="fac15347-d258-4af3-85ab-04ee49634e0a" containerID="663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34" exitCode=0 Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.674774 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" event={"ID":"fac15347-d258-4af3-85ab-04ee49634e0a","Type":"ContainerDied","Data":"663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34"} Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.675929 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-kqnsx" event={"ID":"71e74c76-dc4a-4ab9-a25a-0e925a384492","Type":"ContainerStarted","Data":"85a068a22e4dae80fa071c15459730d59bc27c756de8850bc22ebc6a436236b8"} Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.689869 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.702412 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[
{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.728576 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics 
northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\
"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"
state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v
p7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswi
tch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.741924 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8709dcb48a2e6302ad67942cb60afcb91050a57a792ea2439f62a08dda2f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.754393 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572758a432519d3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.759456 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.759514 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.759526 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.759547 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.759560 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:22Z","lastTransitionTime":"2026-02-28T04:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.767655 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kqnsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e74c76-dc4a-4ab9-a25a-0e925a384492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4nws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kqnsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.789737 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.803015 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.818935 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.841727 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f40dbe122a22253898d35b10f0744955b906a164a96294c156ff7a092ca1a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd405
22302f9192f50bd3b860a550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.859070 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.865308 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.865384 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.865400 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.865423 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.865439 5014 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:22Z","lastTransitionTime":"2026-02-28T04:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.874982 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.892060 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.900395 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.900439 5014 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.900449 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.900465 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.900475 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:22Z","lastTransitionTime":"2026-02-28T04:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.911772 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: E0228 04:35:22.916092 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.924466 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.924526 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.924539 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.924563 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.924579 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:22Z","lastTransitionTime":"2026-02-28T04:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.926019 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: E0228 04:35:22.937310 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.941170 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.941220 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.941234 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.941253 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.941269 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:22Z","lastTransitionTime":"2026-02-28T04:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:22 crc kubenswrapper[5014]: E0228 04:35:22.954587 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.958510 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.959035 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.959046 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.959061 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.959072 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:22Z","lastTransitionTime":"2026-02-28T04:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:22 crc kubenswrapper[5014]: E0228 04:35:22.970108 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.973151 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.973189 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.973210 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.973226 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.973235 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:22Z","lastTransitionTime":"2026-02-28T04:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:22 crc kubenswrapper[5014]: E0228 04:35:22.985277 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:22 crc kubenswrapper[5014]: E0228 04:35:22.985387 5014 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.986723 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.986786 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.986824 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.986850 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:22 crc kubenswrapper[5014]: I0228 04:35:22.986867 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:22Z","lastTransitionTime":"2026-02-28T04:35:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.088953 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.088993 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.089011 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.089028 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.089039 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:23Z","lastTransitionTime":"2026-02-28T04:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.191189 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.191240 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.191255 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.191276 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.191292 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:23Z","lastTransitionTime":"2026-02-28T04:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.298571 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.298625 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.298644 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.298661 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.298673 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:23Z","lastTransitionTime":"2026-02-28T04:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.400966 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.401015 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.401025 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.401042 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.401052 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:23Z","lastTransitionTime":"2026-02-28T04:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.503653 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.503696 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.503706 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.503723 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.503737 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:23Z","lastTransitionTime":"2026-02-28T04:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.606786 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.606857 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.606869 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.606885 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.606895 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:23Z","lastTransitionTime":"2026-02-28T04:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.685154 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" event={"ID":"fac15347-d258-4af3-85ab-04ee49634e0a","Type":"ContainerStarted","Data":"2ad3ec48e4ea5a2629572974dd2869f20e3c380d53e4ea670a60eb00bd3420fb"} Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.696510 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" event={"ID":"faa5db1f-df50-492a-9d45-d5065bdc63d2","Type":"ContainerStarted","Data":"7c32b6bf1146f30d8b88dfecb5d739e93ff01d97e90921e27aa611b9bffb2710"} Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.696835 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.697063 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.698680 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-kqnsx" event={"ID":"71e74c76-dc4a-4ab9-a25a-0e925a384492","Type":"ContainerStarted","Data":"2c8528e1edba66f9142fd7b6205f054c15193d6759027c894d8e912f746f642c"} Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.707537 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 
04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:23Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.709104 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.709153 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.709166 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.709186 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:23 crc 
kubenswrapper[5014]: I0228 04:35:23.709200 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:23Z","lastTransitionTime":"2026-02-28T04:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.729750 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f1
3fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.1
26.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:23Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.751364 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.751593 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"re
startCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn
kube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\"
:\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}}
,\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:23Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.768523 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8709dcb48a2e6302ad67942cb60afcb91050a57a792ea2439f62a08dda2f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T04:35:23Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.782319 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572758a432519d3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:23Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.798947 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f40dbe122a22253898d35b10f07449
55b906a164a96294c156ff7a092ca1a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd40522302f9192f50bd3b860a550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"202
6-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:23Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.812844 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.812900 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.812916 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.812942 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.812956 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:23Z","lastTransitionTime":"2026-02-28T04:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.817180 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ad3ec48e4ea5a2629572974dd2869f20e3c380d53e4ea670a60eb00bd3420fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:23Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.828482 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kqnsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e74c76-dc4a-4ab9-a25a-0e925a384492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4nws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kqnsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:23Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.851112 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:23Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.865930 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-28T04:35:23Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.884719 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:23Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.911269 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:23Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.915491 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.915546 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.915558 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 
04:35:23.915577 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.915589 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:23Z","lastTransitionTime":"2026-02-28T04:35:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.933181 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:23Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.952628 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:23Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.967439 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:23Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.985005 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ad3ec48e4ea5a2629572974dd2869f20e3c380d53e4ea670a60eb00bd3420fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ca
6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:23Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:23 crc kubenswrapper[5014]: I0228 04:35:23.997516 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kqnsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e74c76-dc4a-4ab9-a25a-0e925a384492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8528e1edba66f9142fd7b6205f054c15193d6759027c894d8e912f746f642c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4nws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kqnsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:23Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.019616 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.019678 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.019693 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.019716 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.019730 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:24Z","lastTransitionTime":"2026-02-28T04:35:24Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.026150 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d
0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:24Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.039953 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:24Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.052711 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:24Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.067726 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f40dbe122a22253898d35b10f0744955b906a164a96294c156ff7a092ca1a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd405
22302f9192f50bd3b860a550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:24Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.090030 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:24Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.106295 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:24Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.121015 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:24Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.122693 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.122821 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.122888 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.122990 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.123060 5014 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:24Z","lastTransitionTime":"2026-02-28T04:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.142771 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:24Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.164692 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 
04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:24Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.171655 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.171757 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.171655 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:35:24 crc kubenswrapper[5014]: E0228 04:35:24.171933 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:35:24 crc kubenswrapper[5014]: E0228 04:35:24.172047 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:35:24 crc kubenswrapper[5014]: E0228 04:35:24.172161 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.181715 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release
\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:24Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.201652 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c32b6bf1146f30d8b88dfecb5d739e93ff01d97e90921e27aa611b9bffb2710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:24Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.225699 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.225741 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.225752 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.225767 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.225780 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:24Z","lastTransitionTime":"2026-02-28T04:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.232053 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8709dcb48a2e6302ad67942cb60afcb91050a57a792ea2439f62a08dda2f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:24Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.249571 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572758a432519d3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:24Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.328559 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.328629 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.328644 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.328670 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.328687 5014 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:24Z","lastTransitionTime":"2026-02-28T04:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.432130 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.432555 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.432568 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.432585 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.432596 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:24Z","lastTransitionTime":"2026-02-28T04:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.535904 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.535970 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.535984 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.536007 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.536022 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:24Z","lastTransitionTime":"2026-02-28T04:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.639304 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.639361 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.639376 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.639400 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.639418 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:24Z","lastTransitionTime":"2026-02-28T04:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.704670 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.739687 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.743116 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.743160 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.743181 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.743209 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.743233 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:24Z","lastTransitionTime":"2026-02-28T04:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.754429 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:24Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.768822 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:24Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.787775 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f40dbe122a22253898d35b10f0744955b906a164a96294c156ff7a092ca1a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd405
22302f9192f50bd3b860a550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:24Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.803103 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ad3ec48e4ea5a2629572974dd2869f20e3c380d53e4ea670a60eb00bd3420fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ca
6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:24Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.814378 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kqnsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e74c76-dc4a-4ab9-a25a-0e925a384492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8528e1edba66f9142fd7b6205f054c15193d6759027c894d8e912f746f642c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4nws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kqnsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:24Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.837253 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:24Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.846044 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.846087 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.846099 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.846116 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.846126 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:24Z","lastTransitionTime":"2026-02-28T04:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.852327 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:24Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.867625 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:24Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.881570 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:24Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.894763 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:24Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.908712 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:24Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.925010 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c32b6bf1146f30d8b88dfecb5d739e93ff01d97e90921e27aa611b9bffb2710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:24Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.937509 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:24Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.948329 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.948362 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.948373 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.948391 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.948402 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:24Z","lastTransitionTime":"2026-02-28T04:35:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.949598 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572758a432519d3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:24Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:24 crc kubenswrapper[5014]: I0228 04:35:24.964330 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8709dcb48a2e6302ad67942cb60afcb91050a57a792ea2439f62a08dda2f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:24Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.051307 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.051356 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.051372 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.051393 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.051407 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:25Z","lastTransitionTime":"2026-02-28T04:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.154283 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.154324 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.154336 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.154355 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.154372 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:25Z","lastTransitionTime":"2026-02-28T04:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.257123 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.257207 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.257226 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.257254 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.257334 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:25Z","lastTransitionTime":"2026-02-28T04:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.360722 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.360839 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.360869 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.360905 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.360936 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:25Z","lastTransitionTime":"2026-02-28T04:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.464498 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.464566 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.464583 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.464613 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.464635 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:25Z","lastTransitionTime":"2026-02-28T04:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.567142 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.567184 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.567200 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.567221 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.567236 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:25Z","lastTransitionTime":"2026-02-28T04:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.670931 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.670998 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.671014 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.671040 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.671058 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:25Z","lastTransitionTime":"2026-02-28T04:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.708335 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a34b6ee071d2e3fa127cc0c9a1033e04c35415029f7b8cd8babfa9ed00607b37"} Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.737477 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:25Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.755291 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f40dbe122a22253898d35b10f0744955b906a164a96294c156ff7a092ca1a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd405
22302f9192f50bd3b860a550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:25Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.774569 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.774613 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.774628 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:25 crc 
kubenswrapper[5014]: I0228 04:35:25.774653 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.774667 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:25Z","lastTransitionTime":"2026-02-28T04:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.778270 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ad3ec48e4ea5a2629572974dd2869f20e3c380d53e4ea670a60eb00bd3420fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66
3bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:25Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.793679 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kqnsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e74c76-dc4a-4ab9-a25a-0e925a384492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8528e1edba66f9142fd7b6205f054c15193d6759027c894d8e912f746f642c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4nws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kqnsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:25Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.823559 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04
:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:25Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.840659 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:25Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.857292 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:25Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.875413 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a34b6ee071d2e3fa127cc0c9a1033e04c35415029f7b8cd8babfa9ed00607b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:25Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.879418 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.879520 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.879533 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.879576 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.879588 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:25Z","lastTransitionTime":"2026-02-28T04:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.887659 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:25Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.902889 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:25Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.927938 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408
829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:25Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.943164 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:25Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.963855 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c32b6bf1146f30d8b88dfecb5d739e93ff01d97e90921e27aa611b9bffb2710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:25Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.976183 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8709dcb48a2e6302ad67942cb60afcb91050a57a792ea2439f62a08dda2f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:25Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.982205 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.982264 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.982277 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.982295 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.982309 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:25Z","lastTransitionTime":"2026-02-28T04:35:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:25 crc kubenswrapper[5014]: I0228 04:35:25.990090 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572758a432519d3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:25Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.085402 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.085467 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.085479 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.085498 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.085523 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:26Z","lastTransitionTime":"2026-02-28T04:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.171296 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.171322 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.171292 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:35:26 crc kubenswrapper[5014]: E0228 04:35:26.171428 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:35:26 crc kubenswrapper[5014]: E0228 04:35:26.171528 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:35:26 crc kubenswrapper[5014]: E0228 04:35:26.171608 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.188412 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.188466 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.188475 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.188491 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.188502 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:26Z","lastTransitionTime":"2026-02-28T04:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.291794 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.291894 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.291905 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.291923 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.291935 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:26Z","lastTransitionTime":"2026-02-28T04:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.395603 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.395645 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.395655 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.395676 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.395688 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:26Z","lastTransitionTime":"2026-02-28T04:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.498693 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.498756 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.498768 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.498791 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.498832 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:26Z","lastTransitionTime":"2026-02-28T04:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.602380 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.602478 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.602514 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.602536 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.602550 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:26Z","lastTransitionTime":"2026-02-28T04:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.705699 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.705756 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.705773 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.705833 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.705861 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:26Z","lastTransitionTime":"2026-02-28T04:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.809400 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.809487 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.809533 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.809552 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.809563 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:26Z","lastTransitionTime":"2026-02-28T04:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.912690 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.912738 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.912748 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.912766 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:26 crc kubenswrapper[5014]: I0228 04:35:26.912778 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:26Z","lastTransitionTime":"2026-02-28T04:35:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.016459 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.016519 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.016533 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.016557 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.016573 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:27Z","lastTransitionTime":"2026-02-28T04:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.120022 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.120092 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.120113 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.120139 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.120158 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:27Z","lastTransitionTime":"2026-02-28T04:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.223776 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.223841 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.223852 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.223871 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.223883 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:27Z","lastTransitionTime":"2026-02-28T04:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.328015 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.328080 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.328097 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.328124 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.328143 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:27Z","lastTransitionTime":"2026-02-28T04:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.432054 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.432106 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.432116 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.432135 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.432154 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:27Z","lastTransitionTime":"2026-02-28T04:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.536422 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.536507 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.536527 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.536567 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.536592 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:27Z","lastTransitionTime":"2026-02-28T04:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.612166 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94"] Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.613300 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.615524 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.615864 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.632721 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8709dcb48a2e6302ad67942cb60afcb91050a57a792ea2439f62a08dda2f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:27Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.639302 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.639340 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.639352 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.639369 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.639381 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:27Z","lastTransitionTime":"2026-02-28T04:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.646444 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572758a432519d3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:27Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.663965 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf
3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:27Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.682685 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:27Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.699620 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f40dbe122a22253898d35b10f0744955b906a164a96294c156ff7a092ca1a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd405
22302f9192f50bd3b860a550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:27Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.721355 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ad3ec48e4ea5a2629572974dd2869f20e3c380d53e4ea670a60eb00bd3420fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ca
6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:27Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.724358 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a93c2fbd-22ea-4935-9d13-0cff87209a82-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-lgr94\" (UID: \"a93c2fbd-22ea-4935-9d13-0cff87209a82\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.724439 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a93c2fbd-22ea-4935-9d13-0cff87209a82-env-overrides\") pod \"ovnkube-control-plane-749d76644c-lgr94\" (UID: \"a93c2fbd-22ea-4935-9d13-0cff87209a82\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.724478 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a93c2fbd-22ea-4935-9d13-0cff87209a82-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-lgr94\" (UID: \"a93c2fbd-22ea-4935-9d13-0cff87209a82\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.724524 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s5bv\" (UniqueName: \"kubernetes.io/projected/a93c2fbd-22ea-4935-9d13-0cff87209a82-kube-api-access-5s5bv\") pod \"ovnkube-control-plane-749d76644c-lgr94\" (UID: 
\"a93c2fbd-22ea-4935-9d13-0cff87209a82\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.735330 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kqnsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e74c76-dc4a-4ab9-a25a-0e925a384492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8528e1edba66f9142fd7b6205f054c15193d6759027c894d8e912f746f642c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",
\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4nws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kqnsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:27Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.741712 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.741752 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.741765 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.741786 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.741818 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:27Z","lastTransitionTime":"2026-02-28T04:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.761713 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:27Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.777331 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:27Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.799066 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:27Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.813579 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:27Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.826262 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/a93c2fbd-22ea-4935-9d13-0cff87209a82-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-lgr94\" (UID: \"a93c2fbd-22ea-4935-9d13-0cff87209a82\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.826352 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a93c2fbd-22ea-4935-9d13-0cff87209a82-env-overrides\") pod \"ovnkube-control-plane-749d76644c-lgr94\" (UID: \"a93c2fbd-22ea-4935-9d13-0cff87209a82\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.826392 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a93c2fbd-22ea-4935-9d13-0cff87209a82-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-lgr94\" (UID: \"a93c2fbd-22ea-4935-9d13-0cff87209a82\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.826426 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s5bv\" (UniqueName: \"kubernetes.io/projected/a93c2fbd-22ea-4935-9d13-0cff87209a82-kube-api-access-5s5bv\") pod \"ovnkube-control-plane-749d76644c-lgr94\" (UID: \"a93c2fbd-22ea-4935-9d13-0cff87209a82\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.827875 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/a93c2fbd-22ea-4935-9d13-0cff87209a82-env-overrides\") pod \"ovnkube-control-plane-749d76644c-lgr94\" (UID: \"a93c2fbd-22ea-4935-9d13-0cff87209a82\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.828105 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/a93c2fbd-22ea-4935-9d13-0cff87209a82-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-lgr94\" (UID: \"a93c2fbd-22ea-4935-9d13-0cff87209a82\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.830369 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a34b6ee071d2e3fa127cc0c9a1033e04c35415029f7b8cd8babfa9ed00607b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:27Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.835027 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/a93c2fbd-22ea-4935-9d13-0cff87209a82-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-lgr94\" (UID: \"a93c2fbd-22ea-4935-9d13-0cff87209a82\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.844926 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.845006 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.845021 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.845044 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.845059 5014 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:27Z","lastTransitionTime":"2026-02-28T04:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.851251 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c32b6bf1146f30d8b88dfecb5d739e93ff01d97e90921e27aa611b9bffb2710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:27Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.856613 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s5bv\" (UniqueName: \"kubernetes.io/projected/a93c2fbd-22ea-4935-9d13-0cff87209a82-kube-api-access-5s5bv\") pod \"ovnkube-control-plane-749d76644c-lgr94\" (UID: \"a93c2fbd-22ea-4935-9d13-0cff87209a82\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.870037 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93c2fbd-22ea-4935-9d13-0cff87209a82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:27Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.888491 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:27Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.906032 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[
{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:27Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.936637 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.947783 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.947865 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.947878 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.947898 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:27 crc kubenswrapper[5014]: I0228 04:35:27.947913 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:27Z","lastTransitionTime":"2026-02-28T04:35:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.050678 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.050732 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.050757 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.050777 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.050791 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:28Z","lastTransitionTime":"2026-02-28T04:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.154444 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.154489 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.154503 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.154524 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.154538 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:28Z","lastTransitionTime":"2026-02-28T04:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.171376 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.171416 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:35:28 crc kubenswrapper[5014]: E0228 04:35:28.171515 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.171416 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:35:28 crc kubenswrapper[5014]: E0228 04:35:28.171617 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:35:28 crc kubenswrapper[5014]: E0228 04:35:28.171724 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.257688 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.257744 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.257756 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.257778 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.257796 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:28Z","lastTransitionTime":"2026-02-28T04:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.360881 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.360936 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.360955 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.360977 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.360994 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:28Z","lastTransitionTime":"2026-02-28T04:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.366453 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-rqllg"] Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.367170 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:35:28 crc kubenswrapper[5014]: E0228 04:35:28.367266 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.387394 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:28Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.408012 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a34b6ee071d2e3fa127cc0c9a1033e04c35415029f7b8cd8babfa9ed00607b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:28Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.424579 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:28Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.442846 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:28Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.463775 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.463857 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.463875 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.463897 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.463914 5014 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:28Z","lastTransitionTime":"2026-02-28T04:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.464019 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:28Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.477027 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[
{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:28Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.501012 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c32b6bf1146f30d8b88dfecb5d739e93ff01d97e90921e27aa611b9bffb2710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:28Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.516430 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93c2fbd-22ea-4935-9d13-0cff87209a82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lgr94\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:28Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.534977 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8709dcb48a2e6302ad67942cb60afcb91050a57a792ea2439f62a08dda2f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:28Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.535706 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pvqr\" (UniqueName: \"kubernetes.io/projected/a2258094-df28-401d-aa20-0931bedcb66b-kube-api-access-6pvqr\") pod \"network-metrics-daemon-rqllg\" (UID: \"a2258094-df28-401d-aa20-0931bedcb66b\") " pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.535794 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2258094-df28-401d-aa20-0931bedcb66b-metrics-certs\") pod \"network-metrics-daemon-rqllg\" (UID: \"a2258094-df28-401d-aa20-0931bedcb66b\") " pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.547060 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572758a432519d3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:28Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.562007 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ad3ec48e4ea5a2629572974dd2869f20e3c380d53e4ea670a60eb00bd3420fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9bea
f102caf0be091b68f1773d381bd2a042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-02-28T04:35:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:28Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.568112 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.568142 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.568151 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.568165 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.568173 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:28Z","lastTransitionTime":"2026-02-28T04:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.573382 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kqnsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e74c76-dc4a-4ab9-a25a-0e925a384492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8528e1edba66f9142fd7b6205f054c15193d6759027c894d8e912f746f642c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},
{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4nws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kqnsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:28Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.601079 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:28Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.612901 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-28T04:35:28Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.626007 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:28Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.636946 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pvqr\" (UniqueName: \"kubernetes.io/projected/a2258094-df28-401d-aa20-0931bedcb66b-kube-api-access-6pvqr\") pod \"network-metrics-daemon-rqllg\" (UID: \"a2258094-df28-401d-aa20-0931bedcb66b\") " pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.637012 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2258094-df28-401d-aa20-0931bedcb66b-metrics-certs\") pod \"network-metrics-daemon-rqllg\" (UID: \"a2258094-df28-401d-aa20-0931bedcb66b\") " pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:35:28 crc kubenswrapper[5014]: E0228 04:35:28.637120 5014 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 04:35:28 crc kubenswrapper[5014]: E0228 
04:35:28.637164 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2258094-df28-401d-aa20-0931bedcb66b-metrics-certs podName:a2258094-df28-401d-aa20-0931bedcb66b nodeName:}" failed. No retries permitted until 2026-02-28 04:35:29.13714933 +0000 UTC m=+117.807275230 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a2258094-df28-401d-aa20-0931bedcb66b-metrics-certs") pod "network-metrics-daemon-rqllg" (UID: "a2258094-df28-401d-aa20-0931bedcb66b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.639536 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f40dbe122a22253898d35b10f0744955b906a164a96294c156ff7a092ca1a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c4274
5f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd40522302f9192f50bd3b860a550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:28Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.651994 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqllg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2258094-df28-401d-aa20-0931bedcb66b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:28Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:28 crc 
kubenswrapper[5014]: I0228 04:35:28.656763 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pvqr\" (UniqueName: \"kubernetes.io/projected/a2258094-df28-401d-aa20-0931bedcb66b-kube-api-access-6pvqr\") pod \"network-metrics-daemon-rqllg\" (UID: \"a2258094-df28-401d-aa20-0931bedcb66b\") " pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.670617 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.670702 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.670721 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.670737 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.670747 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:28Z","lastTransitionTime":"2026-02-28T04:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.719762 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" event={"ID":"a93c2fbd-22ea-4935-9d13-0cff87209a82","Type":"ContainerStarted","Data":"ed1d996cd1300f3bb16b61b994ef4f54dc63d091344c9f5ba0352c9c0770e8fb"} Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.719884 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" event={"ID":"a93c2fbd-22ea-4935-9d13-0cff87209a82","Type":"ContainerStarted","Data":"591fc958001d315afef158a50ac846037571973aca5af468a6db1f003fc79ff8"} Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.721265 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-62hnq_faa5db1f-df50-492a-9d45-d5065bdc63d2/ovnkube-controller/0.log" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.723560 5014 generic.go:334] "Generic (PLEG): container finished" podID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerID="7c32b6bf1146f30d8b88dfecb5d739e93ff01d97e90921e27aa611b9bffb2710" exitCode=1 Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.723597 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" event={"ID":"faa5db1f-df50-492a-9d45-d5065bdc63d2","Type":"ContainerDied","Data":"7c32b6bf1146f30d8b88dfecb5d739e93ff01d97e90921e27aa611b9bffb2710"} Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.724538 5014 scope.go:117] "RemoveContainer" containerID="7c32b6bf1146f30d8b88dfecb5d739e93ff01d97e90921e27aa611b9bffb2710" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.745042 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:28Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.758670 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-28T04:35:28Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.773709 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:28Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.775214 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.775235 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.775243 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.775257 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.775266 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:28Z","lastTransitionTime":"2026-02-28T04:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.788994 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f40dbe122a22253898d35b10f0744955b906a164a96294c156ff7a092ca1a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd40522302f9192f50bd3b860a550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:28Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.806340 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ad3ec48e4ea5a2629572974dd2869f20e3c380d53e4ea670a60eb00bd3420fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608
c08d12a11e006f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:
35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:28Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.820960 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kqnsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e74c76-dc4a-4ab9-a25a-0e925a384492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8528e1edba66f9142fd7b6205f054c15193d6759027c894d8e912f746f642c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"s
tartedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4nws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kqnsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:28Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.836828 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2258094-df28-401d-aa20-0931bedcb66b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:28Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:28 crc 
kubenswrapper[5014]: I0228 04:35:28.852602 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:28Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.869633 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a34b6ee071d2e3fa127cc0c9a1033e04c35415029f7b8cd8babfa9ed00607b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:28Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.877553 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.877601 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.877615 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.877634 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.877647 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:28Z","lastTransitionTime":"2026-02-28T04:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.884600 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:28Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.903898 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:28Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.919556 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408
829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:28Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.934206 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:28Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.959631 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c32b6bf1146f30d8b88dfecb5d739e93ff01d97e90921e27aa611b9bffb2710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c32b6bf1146f30d8b88dfecb5d739e93ff01d97e90921e27aa611b9bffb2710\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T04:35:27Z\\\",\\\"message\\\":\\\" 6798 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0228 04:35:27.100793 6798 reflector.go:311] 
Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 04:35:27.101185 6798 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0228 04:35:27.101541 6798 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0228 04:35:27.101591 6798 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 04:35:27.101836 6798 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0228 04:35:27.102448 6798 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 04:35:27.103023 6798 factory.go:656] Stopping watch factory\\\\nI0228 04:35:27.103099 6798 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:28Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.980906 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.980963 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.980976 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.980997 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.981012 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:28Z","lastTransitionTime":"2026-02-28T04:35:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:28 crc kubenswrapper[5014]: I0228 04:35:28.985131 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93c2fbd-22ea-4935-9d13-0cff87209a82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lgr94\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:28Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.016925 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8709dcb48a2e6302ad67942cb60afcb91050a57a792ea2439f62a08dda2f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:29Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.052938 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572758a432519d3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:29Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.083954 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.084002 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.084015 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.084034 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.084045 5014 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:29Z","lastTransitionTime":"2026-02-28T04:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.146884 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2258094-df28-401d-aa20-0931bedcb66b-metrics-certs\") pod \"network-metrics-daemon-rqllg\" (UID: \"a2258094-df28-401d-aa20-0931bedcb66b\") " pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:35:29 crc kubenswrapper[5014]: E0228 04:35:29.147036 5014 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 04:35:29 crc kubenswrapper[5014]: E0228 04:35:29.147103 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2258094-df28-401d-aa20-0931bedcb66b-metrics-certs podName:a2258094-df28-401d-aa20-0931bedcb66b nodeName:}" failed. No retries permitted until 2026-02-28 04:35:30.147086331 +0000 UTC m=+118.817212241 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a2258094-df28-401d-aa20-0931bedcb66b-metrics-certs") pod "network-metrics-daemon-rqllg" (UID: "a2258094-df28-401d-aa20-0931bedcb66b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.171777 5014 scope.go:117] "RemoveContainer" containerID="acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.187224 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.187266 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.187276 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.187293 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.187304 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:29Z","lastTransitionTime":"2026-02-28T04:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.289943 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.290033 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.290051 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.290075 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.290097 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:29Z","lastTransitionTime":"2026-02-28T04:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.392981 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.393043 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.393059 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.393078 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.393092 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:29Z","lastTransitionTime":"2026-02-28T04:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.497299 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.497349 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.497361 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.497377 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.497390 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:29Z","lastTransitionTime":"2026-02-28T04:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.600451 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.600840 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.600854 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.600870 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.600880 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:29Z","lastTransitionTime":"2026-02-28T04:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.703563 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.703641 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.703654 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.703681 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.703694 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:29Z","lastTransitionTime":"2026-02-28T04:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.729498 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" event={"ID":"a93c2fbd-22ea-4935-9d13-0cff87209a82","Type":"ContainerStarted","Data":"9dd97c6cb9aa1bad68bc5df66df1561ea3b9e38dabe1aa17c100f913e6a3e0aa"} Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.732148 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.734060 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"719151f8073cf997e9ca1087dcbb47e8f83f0e25a700251737c5a5d4b39bfde4"} Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.734274 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.736548 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-62hnq_faa5db1f-df50-492a-9d45-d5065bdc63d2/ovnkube-controller/0.log" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.739233 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" event={"ID":"faa5db1f-df50-492a-9d45-d5065bdc63d2","Type":"ContainerStarted","Data":"6e4c9ed4ecb01a842a39956de2f374df82394dd4c46ac4b11eb627ba0cd691ef"} Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.739605 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.745076 5014 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"
name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a
3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:29Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.767446 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:29Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.790864 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c32b6bf1146f30d8b88dfecb5d739e93ff01d97e90921e27aa611b9bffb2710\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c32b6bf1146f30d8b88dfecb5d739e93ff01d97e90921e27aa611b9bffb2710\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T04:35:27Z\\\",\\\"message\\\":\\\" 6798 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0228 04:35:27.100793 6798 reflector.go:311] 
Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 04:35:27.101185 6798 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0228 04:35:27.101541 6798 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0228 04:35:27.101591 6798 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 04:35:27.101836 6798 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0228 04:35:27.102448 6798 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 04:35:27.103023 6798 factory.go:656] Stopping watch factory\\\\nI0228 04:35:27.103099 6798 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:29Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.805995 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.806024 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.806031 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.806044 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.806052 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:29Z","lastTransitionTime":"2026-02-28T04:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.806248 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93c2fbd-22ea-4935-9d13-0cff87209a82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed1d996cd1300f3bb16b61b994ef4f54dc63d091344c9f5ba0352c9c0770e8fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dd97c6cb9aa1bad68bc5df66df1561ea3b9e38dabe1aa17c100f913e6a3e0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:29Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 
04:35:29.818856 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8709dcb48a2e6302ad67942cb60afcb91050a57a792ea2439f62a08dda2f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:29Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.829800 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572758a432519d3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T
04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:29Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.855571 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:29Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.868883 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-02-28T04:35:29Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.883501 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:29Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.896336 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f40dbe122a22253898d35b10f0744955b906a164a96294c156ff7a092ca1a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd405
22302f9192f50bd3b860a550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:29Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.909493 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.909526 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.909535 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:29 crc 
kubenswrapper[5014]: I0228 04:35:29.909550 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.909560 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:29Z","lastTransitionTime":"2026-02-28T04:35:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.913626 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ad3ec48e4ea5a2629572974dd2869f20e3c380d53e4ea670a60eb00bd3420fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://66
3bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:29Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.924912 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kqnsx" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e74c76-dc4a-4ab9-a25a-0e925a384492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8528e1edba66f9142fd7b6205f054c15193d6759027c894d8e912f746f642c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4nws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kqnsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:29Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.938117 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqllg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2258094-df28-401d-aa20-0931bedcb66b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:29Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:29 crc 
kubenswrapper[5014]: I0228 04:35:29.949791 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:29Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.962085 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a34b6ee071d2e3fa127cc0c9a1033e04c35415029f7b8cd8babfa9ed00607b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:29Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.974450 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:29Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:29 crc kubenswrapper[5014]: I0228 04:35:29.989137 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:29Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.007045 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ad3ec48e4ea5a2629572974dd2869f20e3c380d53e4ea670a60eb00bd3420fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ca
6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:30Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.012007 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.012049 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.012058 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.012091 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.012100 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:30Z","lastTransitionTime":"2026-02-28T04:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.018474 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kqnsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e74c76-dc4a-4ab9-a25a-0e925a384492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8528e1edba66f9142fd7b6205f054c15193d6759027c894d8e912f746f642c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4nws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kqnsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:30Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.037180 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab0
9ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:30Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.048707 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:30Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.062334 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:30Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.074562 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f40dbe122a22253898d35b10f0744955b906a164a96294c156ff7a092ca1a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd405
22302f9192f50bd3b860a550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:30Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.086037 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2258094-df28-401d-aa20-0931bedcb66b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:30Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:30 crc 
kubenswrapper[5014]: I0228 04:35:30.102865 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:30Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.115053 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.115096 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.115106 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.115121 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.115130 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:30Z","lastTransitionTime":"2026-02-28T04:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.122461 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a34b6ee071d2e3fa127cc0c9a1033e04c35415029f7b8cd8babfa9ed00607b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"
readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:30Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.141715 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:30Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.158943 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:30Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.171466 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.171564 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:35:30 crc kubenswrapper[5014]: E0228 04:35:30.171616 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.171627 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2258094-df28-401d-aa20-0931bedcb66b-metrics-certs\") pod \"network-metrics-daemon-rqllg\" (UID: \"a2258094-df28-401d-aa20-0931bedcb66b\") " pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.171684 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:35:30 crc kubenswrapper[5014]: E0228 04:35:30.171722 5014 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 04:35:30 crc kubenswrapper[5014]: E0228 04:35:30.171718 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:35:30 crc kubenswrapper[5014]: E0228 04:35:30.171784 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2258094-df28-401d-aa20-0931bedcb66b-metrics-certs podName:a2258094-df28-401d-aa20-0931bedcb66b nodeName:}" failed. No retries permitted until 2026-02-28 04:35:32.171764334 +0000 UTC m=+120.841890454 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a2258094-df28-401d-aa20-0931bedcb66b-metrics-certs") pod "network-metrics-daemon-rqllg" (UID: "a2258094-df28-401d-aa20-0931bedcb66b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.171466 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:30 crc kubenswrapper[5014]: E0228 04:35:30.171890 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:35:30 crc kubenswrapper[5014]: E0228 04:35:30.172018 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.183242 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719151f8073cf997e9ca1087dcbb47e8f83f0e25a700251737c5a5d4b39bfde4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:30Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.205964 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:30Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.218105 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.218161 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.218175 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.218196 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.218208 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:30Z","lastTransitionTime":"2026-02-28T04:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.234778 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e4c9ed4ecb01a842a39956de2f374df82394dd4c46ac4b11eb627ba0cd691ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c32b6bf1146f30d8b88dfecb5d739e93ff01d97e90921e27aa611b9bffb2710\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T04:35:27Z\\\",\\\"message\\\":\\\" 6798 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0228 04:35:27.100793 6798 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 04:35:27.101185 6798 
reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0228 04:35:27.101541 6798 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0228 04:35:27.101591 6798 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 04:35:27.101836 6798 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0228 04:35:27.102448 6798 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 04:35:27.103023 6798 factory.go:656] Stopping watch factory\\\\nI0228 04:35:27.103099 6798 handler.go:208] Removed *v1.EgressIP 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:30Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.248350 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93c2fbd-22ea-4935-9d13-0cff87209a82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed1d996cd1300f3bb16b61b994ef4f54dc63d091344c9f5ba0352c9c0770e8fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dd97c6cb9aa1bad68bc5df66df1561ea3b9e
38dabe1aa17c100f913e6a3e0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:30Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.273179 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8709dcb48a2e6302ad67942cb60afcb91050a57a792ea2439f62a08dda2f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T04:35:30Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.288889 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572758a432519d3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:30Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.320975 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.321031 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.321045 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.321065 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.321080 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:30Z","lastTransitionTime":"2026-02-28T04:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.423727 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.423785 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.423798 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.423896 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.423911 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:30Z","lastTransitionTime":"2026-02-28T04:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.526258 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.526309 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.526321 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.526340 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.526352 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:30Z","lastTransitionTime":"2026-02-28T04:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.629582 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.630045 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.630055 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.630070 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.630078 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:30Z","lastTransitionTime":"2026-02-28T04:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.732277 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.732330 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.732342 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.732360 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.732373 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:30Z","lastTransitionTime":"2026-02-28T04:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.743862 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-62hnq_faa5db1f-df50-492a-9d45-d5065bdc63d2/ovnkube-controller/1.log" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.744571 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-62hnq_faa5db1f-df50-492a-9d45-d5065bdc63d2/ovnkube-controller/0.log" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.747773 5014 generic.go:334] "Generic (PLEG): container finished" podID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerID="6e4c9ed4ecb01a842a39956de2f374df82394dd4c46ac4b11eb627ba0cd691ef" exitCode=1 Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.747837 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" event={"ID":"faa5db1f-df50-492a-9d45-d5065bdc63d2","Type":"ContainerDied","Data":"6e4c9ed4ecb01a842a39956de2f374df82394dd4c46ac4b11eb627ba0cd691ef"} Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.747896 5014 scope.go:117] "RemoveContainer" containerID="7c32b6bf1146f30d8b88dfecb5d739e93ff01d97e90921e27aa611b9bffb2710" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.748844 5014 scope.go:117] "RemoveContainer" containerID="6e4c9ed4ecb01a842a39956de2f374df82394dd4c46ac4b11eb627ba0cd691ef" Feb 28 04:35:30 crc kubenswrapper[5014]: E0228 04:35:30.749014 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-62hnq_openshift-ovn-kubernetes(faa5db1f-df50-492a-9d45-d5065bdc63d2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.764173 5014 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:30Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.777778 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:30Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.793237 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f40dbe122a22253898d35b10f0744955b906a164a96294c156ff7a092ca1a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd405
22302f9192f50bd3b860a550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:30Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.808918 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ad3ec48e4ea5a2629572974dd2869f20e3c380d53e4ea670a60eb00bd3420fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ca
6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:30Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.820507 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kqnsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e74c76-dc4a-4ab9-a25a-0e925a384492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8528e1edba66f9142fd7b6205f054c15193d6759027c894d8e912f746f642c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4nws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kqnsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:30Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.835132 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.835323 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.835409 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.835468 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.835525 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:30Z","lastTransitionTime":"2026-02-28T04:35:30Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.841684 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d
0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:30Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.855003 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqllg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2258094-df28-401d-aa20-0931bedcb66b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqllg\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:30Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.867657 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:30Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.883739 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:30Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.899441 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a34b6ee071d2e3fa127cc0c9a1033e04c35415029f7b8cd8babfa9ed00607b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:30Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.914417 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:30Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.929669 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93c2fbd-22ea-4935-9d13-0cff87209a82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed1d996cd1300f3bb16b61b994ef4f54dc63d091344c9f5ba0352c9c0770e8fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dd97c6cb9aa1bad68bc5df66df1561ea3b9e
38dabe1aa17c100f913e6a3e0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:30Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.938411 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.938439 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.938447 5014 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.938461 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.938469 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:30Z","lastTransitionTime":"2026-02-28T04:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.947377 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719151f8073cf997e9ca1087dcbb47e8f83f0e25a700251737c5a5d4b39bfde4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:30Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.960718 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:30Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.977471 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e4c9ed4ecb01a842a39956de2f374df82394dd4c46ac4b11eb627ba0cd691ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7c32b6bf1146f30d8b88dfecb5d739e93ff01d97e90921e27aa611b9bffb2710\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T04:35:27Z\\\",\\\"message\\\":\\\" 6798 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0228 04:35:27.100793 6798 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 04:35:27.101185 6798 
reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0228 04:35:27.101541 6798 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0228 04:35:27.101591 6798 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 04:35:27.101836 6798 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0228 04:35:27.102448 6798 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0228 04:35:27.103023 6798 factory.go:656] Stopping watch factory\\\\nI0228 04:35:27.103099 6798 handler.go:208] Removed *v1.EgressIP ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e4c9ed4ecb01a842a39956de2f374df82394dd4c46ac4b11eb627ba0cd691ef\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"message\\\":\\\"e\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", 
UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0228 04:35:29.894300 7027 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:30Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.987491 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8709dcb48a2e6302ad67942cb60afcb91050a57a792ea2439f62a08dda2f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T04:35:30Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:30 crc kubenswrapper[5014]: I0228 04:35:30.996837 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572758a432519d3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:30Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.040459 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.040500 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.040513 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.040530 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.040543 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:31Z","lastTransitionTime":"2026-02-28T04:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.142833 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.142909 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.142922 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.142940 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.142959 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:31Z","lastTransitionTime":"2026-02-28T04:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.245673 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.245702 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.245713 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.245727 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.245737 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:31Z","lastTransitionTime":"2026-02-28T04:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.347230 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.347271 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.347280 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.347293 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.347302 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:31Z","lastTransitionTime":"2026-02-28T04:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.449761 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.449834 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.449848 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.449869 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.449882 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:31Z","lastTransitionTime":"2026-02-28T04:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.553540 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.553600 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.553612 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.553631 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.553644 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:31Z","lastTransitionTime":"2026-02-28T04:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.656336 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.656396 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.656413 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.656437 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.656455 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:31Z","lastTransitionTime":"2026-02-28T04:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.753209 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-62hnq_faa5db1f-df50-492a-9d45-d5065bdc63d2/ovnkube-controller/1.log" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.757936 5014 scope.go:117] "RemoveContainer" containerID="6e4c9ed4ecb01a842a39956de2f374df82394dd4c46ac4b11eb627ba0cd691ef" Feb 28 04:35:31 crc kubenswrapper[5014]: E0228 04:35:31.758164 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-62hnq_openshift-ovn-kubernetes(faa5db1f-df50-492a-9d45-d5065bdc63d2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.758588 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.758647 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.758662 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.758684 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.758698 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:31Z","lastTransitionTime":"2026-02-28T04:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.773200 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8709dcb48a2e6302ad67942cb60afcb91050a57a792ea2439f62a08dda2f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" 
for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:31Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.787542 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572758a432519d3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:31Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.804994 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:31Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.821471 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:31Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.837493 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f40dbe122a22253898d35b10f0744955b906a164a96294c156ff7a092ca1a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd405
22302f9192f50bd3b860a550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:31Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.855676 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ad3ec48e4ea5a2629572974dd2869f20e3c380d53e4ea670a60eb00bd3420fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ca
6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:31Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.861475 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.861519 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.861532 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.861555 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.861572 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:31Z","lastTransitionTime":"2026-02-28T04:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.871388 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kqnsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e74c76-dc4a-4ab9-a25a-0e925a384492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8528e1edba66f9142fd7b6205f054c15193d6759027c894d8e912f746f642c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4nws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kqnsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:31Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.899337 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1
847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab0
9ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:31Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.912357 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2258094-df28-401d-aa20-0931bedcb66b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:31Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:31 crc 
kubenswrapper[5014]: I0228 04:35:31.927274 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:31Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.944339 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:31Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.959677 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:31Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.965225 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.965402 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:31 crc 
kubenswrapper[5014]: I0228 04:35:31.965500 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.965601 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.965682 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:31Z","lastTransitionTime":"2026-02-28T04:35:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.974761 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a34b6ee071d2e3fa127cc0c9a1033e04c35415029f7b8cd8babfa9ed00607b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:31Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.992860 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:35:31 crc kubenswrapper[5014]: E0228 04:35:31.993030 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:36:03.99299957 +0000 UTC m=+152.663125480 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.993383 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:31 crc kubenswrapper[5014]: I0228 04:35:31.993500 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:31 crc kubenswrapper[5014]: E0228 04:35:31.993521 5014 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 04:35:31 crc kubenswrapper[5014]: E0228 04:35:31.993741 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 04:36:03.993731811 +0000 UTC m=+152.663857721 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 04:35:31 crc kubenswrapper[5014]: E0228 04:35:31.993764 5014 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 04:35:31 crc kubenswrapper[5014]: E0228 04:35:31.993922 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 04:36:03.993911355 +0000 UTC m=+152.664037265 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 04:35:32 crc kubenswrapper[5014]: I0228 04:35:32.021870 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e4c9ed4ecb01a842a39956de2f374df82394dd4c46ac4b11eb627ba0cd691ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e4c9ed4ecb01a842a39956de2f374df82394dd4c46ac4b11eb627ba0cd691ef\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"message\\\":\\\"e\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, 
Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0228 04:35:29.894300 7027 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-62hnq_openshift-ovn-kubernetes(faa5db1f-df50-492a-9d45-d5065bdc63d2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26
d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:32 crc kubenswrapper[5014]: I0228 04:35:32.047323 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93c2fbd-22ea-4935-9d13-0cff87209a82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed1d996cd1300f3bb16b61b994ef4f54dc63d091344c9f5ba0352c9c0770e8fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dd97c6cb9aa1bad68bc5df66df1561ea3b9e
38dabe1aa17c100f913e6a3e0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:32 crc kubenswrapper[5014]: I0228 04:35:32.068109 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:32 crc kubenswrapper[5014]: I0228 04:35:32.068153 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:32 crc kubenswrapper[5014]: I0228 04:35:32.068163 5014 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:32 crc kubenswrapper[5014]: I0228 04:35:32.068182 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:32 crc kubenswrapper[5014]: I0228 04:35:32.068192 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:32Z","lastTransitionTime":"2026-02-28T04:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:32 crc kubenswrapper[5014]: I0228 04:35:32.068112 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719151f8073cf997e9ca1087dcbb47e8f83f0e25a700251737c5a5d4b39bfde4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:32 crc kubenswrapper[5014]: I0228 04:35:32.090653 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:32 crc kubenswrapper[5014]: I0228 04:35:32.094007 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:35:32 crc kubenswrapper[5014]: I0228 04:35:32.094059 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:35:32 crc kubenswrapper[5014]: E0228 04:35:32.094190 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 04:35:32 crc kubenswrapper[5014]: E0228 04:35:32.094211 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 04:35:32 crc kubenswrapper[5014]: E0228 04:35:32.094222 5014 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 04:35:32 crc kubenswrapper[5014]: E0228 04:35:32.094267 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-28 04:36:04.094249573 +0000 UTC m=+152.764375483 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 04:35:32 crc kubenswrapper[5014]: E0228 04:35:32.094190 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 04:35:32 crc kubenswrapper[5014]: E0228 04:35:32.094294 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 04:35:32 crc kubenswrapper[5014]: E0228 04:35:32.094301 5014 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 04:35:32 crc kubenswrapper[5014]: E0228 04:35:32.094324 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-28 04:36:04.094318275 +0000 UTC m=+152.764444185 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 04:35:32 crc kubenswrapper[5014]: E0228 04:35:32.169148 5014 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 28 04:35:32 crc kubenswrapper[5014]: I0228 04:35:32.171541 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:32 crc kubenswrapper[5014]: I0228 04:35:32.171567 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:35:32 crc kubenswrapper[5014]: I0228 04:35:32.171603 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:35:32 crc kubenswrapper[5014]: E0228 04:35:32.171763 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:35:32 crc kubenswrapper[5014]: I0228 04:35:32.171971 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:35:32 crc kubenswrapper[5014]: E0228 04:35:32.172055 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:35:32 crc kubenswrapper[5014]: E0228 04:35:32.172294 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:35:32 crc kubenswrapper[5014]: E0228 04:35:32.172738 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:35:32 crc kubenswrapper[5014]: I0228 04:35:32.182253 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 28 04:35:32 crc kubenswrapper[5014]: I0228 04:35:32.187211 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93c2fbd-22ea-4935-9d13-0cff87209a82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed1d996cd1300f3bb16b61b
994ef4f54dc63d091344c9f5ba0352c9c0770e8fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dd97c6cb9aa1bad68bc5df66df1561ea3b9e38dabe1aa17c100f913e6a3e0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\"
:\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:32 crc kubenswrapper[5014]: I0228 04:35:32.194710 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2258094-df28-401d-aa20-0931bedcb66b-metrics-certs\") pod \"network-metrics-daemon-rqllg\" (UID: \"a2258094-df28-401d-aa20-0931bedcb66b\") " pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:35:32 crc kubenswrapper[5014]: E0228 04:35:32.194862 5014 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 04:35:32 crc kubenswrapper[5014]: E0228 04:35:32.194917 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2258094-df28-401d-aa20-0931bedcb66b-metrics-certs podName:a2258094-df28-401d-aa20-0931bedcb66b nodeName:}" failed. No retries permitted until 2026-02-28 04:35:36.194903558 +0000 UTC m=+124.865029468 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a2258094-df28-401d-aa20-0931bedcb66b-metrics-certs") pod "network-metrics-daemon-rqllg" (UID: "a2258094-df28-401d-aa20-0931bedcb66b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 04:35:32 crc kubenswrapper[5014]: I0228 04:35:32.202526 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719151f8073cf997e9ca1087dcbb47e8f83f0e25a700251737c5a5d4b39bfde4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:32 crc kubenswrapper[5014]: I0228 04:35:32.216019 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\
"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:32 crc kubenswrapper[5014]: I0228 04:35:32.235980 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e4c9ed4ecb01a842a39956de2f374df82394dd4c46ac4b11eb627ba0cd691ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e4c9ed4ecb01a842a39956de2f374df82394dd4c46ac4b11eb627ba0cd691ef\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"message\\\":\\\"e\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, 
Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0228 04:35:29.894300 7027 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-62hnq_openshift-ovn-kubernetes(faa5db1f-df50-492a-9d45-d5065bdc63d2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26
d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:32 crc kubenswrapper[5014]: I0228 04:35:32.249514 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8709dcb48a2e6302ad67942cb60afcb91050a57a792ea2439f62a08dda2f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T04:35:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:32 crc kubenswrapper[5014]: I0228 04:35:32.262591 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572758a432519d3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:32 crc kubenswrapper[5014]: E0228 04:35:32.268025 5014 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 28 04:35:32 crc kubenswrapper[5014]: I0228 04:35:32.276192 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:32 crc kubenswrapper[5014]: I0228 04:35:32.290004 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:32 crc kubenswrapper[5014]: I0228 04:35:32.306281 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f40dbe122a22253898d35b10f0744955b906a164a96294c156ff7a092ca1a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd405
22302f9192f50bd3b860a550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:32 crc kubenswrapper[5014]: I0228 04:35:32.323864 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ad3ec48e4ea5a2629572974dd2869f20e3c380d53e4ea670a60eb00bd3420fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ca
6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:32 crc kubenswrapper[5014]: I0228 04:35:32.336938 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kqnsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e74c76-dc4a-4ab9-a25a-0e925a384492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8528e1edba66f9142fd7b6205f054c15193d6759027c894d8e912f746f642c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4nws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kqnsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:32 crc kubenswrapper[5014]: I0228 04:35:32.357780 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:32 crc kubenswrapper[5014]: I0228 04:35:32.370882 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqllg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2258094-df28-401d-aa20-0931bedcb66b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:32 crc 
kubenswrapper[5014]: I0228 04:35:32.389649 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:32 crc kubenswrapper[5014]: I0228 04:35:32.407963 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:32 crc kubenswrapper[5014]: I0228 04:35:32.423015 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a34b6ee071d2e3fa127cc0c9a1033e04c35415029f7b8cd8babfa9ed00607b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:32 crc kubenswrapper[5014]: I0228 04:35:32.438072 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:33 crc kubenswrapper[5014]: I0228 04:35:33.143958 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:33 crc kubenswrapper[5014]: I0228 04:35:33.144008 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:33 crc kubenswrapper[5014]: I0228 04:35:33.144018 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:33 crc kubenswrapper[5014]: I0228 04:35:33.144034 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:33 crc kubenswrapper[5014]: I0228 04:35:33.144044 5014 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:33Z","lastTransitionTime":"2026-02-28T04:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:35:33 crc kubenswrapper[5014]: E0228 04:35:33.157567 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:33Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:33 crc kubenswrapper[5014]: I0228 04:35:33.161986 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:33 crc kubenswrapper[5014]: I0228 04:35:33.162029 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:33 crc kubenswrapper[5014]: I0228 04:35:33.162042 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:33 crc kubenswrapper[5014]: I0228 04:35:33.162060 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:33 crc kubenswrapper[5014]: I0228 04:35:33.162074 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:33Z","lastTransitionTime":"2026-02-28T04:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:33 crc kubenswrapper[5014]: E0228 04:35:33.174841 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:33Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:33 crc kubenswrapper[5014]: I0228 04:35:33.181013 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:33 crc kubenswrapper[5014]: I0228 04:35:33.181083 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:33 crc kubenswrapper[5014]: I0228 04:35:33.181103 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:33 crc kubenswrapper[5014]: I0228 04:35:33.181131 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:33 crc kubenswrapper[5014]: I0228 04:35:33.181150 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:33Z","lastTransitionTime":"2026-02-28T04:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:33 crc kubenswrapper[5014]: E0228 04:35:33.195280 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:33Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:33 crc kubenswrapper[5014]: I0228 04:35:33.199842 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:33 crc kubenswrapper[5014]: I0228 04:35:33.199905 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:33 crc kubenswrapper[5014]: I0228 04:35:33.199922 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:33 crc kubenswrapper[5014]: I0228 04:35:33.199946 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:33 crc kubenswrapper[5014]: I0228 04:35:33.199967 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:33Z","lastTransitionTime":"2026-02-28T04:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:33 crc kubenswrapper[5014]: E0228 04:35:33.216405 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:33Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:33 crc kubenswrapper[5014]: I0228 04:35:33.220967 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:33 crc kubenswrapper[5014]: I0228 04:35:33.221031 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:33 crc kubenswrapper[5014]: I0228 04:35:33.221046 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:33 crc kubenswrapper[5014]: I0228 04:35:33.221073 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:33 crc kubenswrapper[5014]: I0228 04:35:33.221089 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:33Z","lastTransitionTime":"2026-02-28T04:35:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:33 crc kubenswrapper[5014]: E0228 04:35:33.238797 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:33Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:33 crc kubenswrapper[5014]: E0228 04:35:33.239070 5014 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 28 04:35:34 crc kubenswrapper[5014]: I0228 04:35:34.171254 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:35:34 crc kubenswrapper[5014]: I0228 04:35:34.171344 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:35:34 crc kubenswrapper[5014]: E0228 04:35:34.171418 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:35:34 crc kubenswrapper[5014]: I0228 04:35:34.171254 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:35:34 crc kubenswrapper[5014]: E0228 04:35:34.171478 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:35:34 crc kubenswrapper[5014]: I0228 04:35:34.171283 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:34 crc kubenswrapper[5014]: E0228 04:35:34.171536 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:35:34 crc kubenswrapper[5014]: E0228 04:35:34.171558 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:35:36 crc kubenswrapper[5014]: I0228 04:35:36.171193 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:36 crc kubenswrapper[5014]: I0228 04:35:36.171216 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:35:36 crc kubenswrapper[5014]: I0228 04:35:36.171277 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:35:36 crc kubenswrapper[5014]: I0228 04:35:36.171194 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:35:36 crc kubenswrapper[5014]: E0228 04:35:36.171681 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:35:36 crc kubenswrapper[5014]: E0228 04:35:36.171869 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:35:36 crc kubenswrapper[5014]: E0228 04:35:36.171978 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:35:36 crc kubenswrapper[5014]: E0228 04:35:36.172081 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:35:36 crc kubenswrapper[5014]: I0228 04:35:36.233992 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2258094-df28-401d-aa20-0931bedcb66b-metrics-certs\") pod \"network-metrics-daemon-rqllg\" (UID: \"a2258094-df28-401d-aa20-0931bedcb66b\") " pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:35:36 crc kubenswrapper[5014]: E0228 04:35:36.234196 5014 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 04:35:36 crc kubenswrapper[5014]: E0228 04:35:36.234277 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2258094-df28-401d-aa20-0931bedcb66b-metrics-certs podName:a2258094-df28-401d-aa20-0931bedcb66b nodeName:}" failed. No retries permitted until 2026-02-28 04:35:44.234258418 +0000 UTC m=+132.904384328 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a2258094-df28-401d-aa20-0931bedcb66b-metrics-certs") pod "network-metrics-daemon-rqllg" (UID: "a2258094-df28-401d-aa20-0931bedcb66b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 04:35:37 crc kubenswrapper[5014]: E0228 04:35:37.268938 5014 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 28 04:35:38 crc kubenswrapper[5014]: I0228 04:35:38.170926 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:35:38 crc kubenswrapper[5014]: I0228 04:35:38.170964 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:35:38 crc kubenswrapper[5014]: I0228 04:35:38.171036 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:35:38 crc kubenswrapper[5014]: I0228 04:35:38.171070 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:38 crc kubenswrapper[5014]: E0228 04:35:38.171325 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:35:38 crc kubenswrapper[5014]: E0228 04:35:38.171615 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:35:38 crc kubenswrapper[5014]: E0228 04:35:38.171666 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:35:38 crc kubenswrapper[5014]: E0228 04:35:38.171685 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:35:40 crc kubenswrapper[5014]: I0228 04:35:40.171387 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:35:40 crc kubenswrapper[5014]: I0228 04:35:40.171447 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:40 crc kubenswrapper[5014]: E0228 04:35:40.171544 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:35:40 crc kubenswrapper[5014]: I0228 04:35:40.171556 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:35:40 crc kubenswrapper[5014]: I0228 04:35:40.171587 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:35:40 crc kubenswrapper[5014]: E0228 04:35:40.171624 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:35:40 crc kubenswrapper[5014]: E0228 04:35:40.171754 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:35:40 crc kubenswrapper[5014]: E0228 04:35:40.171885 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:35:42 crc kubenswrapper[5014]: I0228 04:35:42.171214 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:42 crc kubenswrapper[5014]: E0228 04:35:42.171383 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:35:42 crc kubenswrapper[5014]: I0228 04:35:42.171937 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:35:42 crc kubenswrapper[5014]: E0228 04:35:42.172011 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:35:42 crc kubenswrapper[5014]: I0228 04:35:42.172058 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:35:42 crc kubenswrapper[5014]: E0228 04:35:42.172195 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:35:42 crc kubenswrapper[5014]: I0228 04:35:42.172334 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:35:42 crc kubenswrapper[5014]: E0228 04:35:42.172382 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:35:42 crc kubenswrapper[5014]: I0228 04:35:42.201381 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e4c9ed4ecb01a842a39956de2f374df82394dd4c46ac4b11eb627ba0cd691ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e4c9ed4ecb01a842a39956de2f374df82394dd4c46ac4b11eb627ba0cd691ef\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"message\\\":\\\"e\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, 
Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0228 04:35:29.894300 7027 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-62hnq_openshift-ovn-kubernetes(faa5db1f-df50-492a-9d45-d5065bdc63d2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26
d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:42Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:42 crc kubenswrapper[5014]: I0228 04:35:42.213374 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93c2fbd-22ea-4935-9d13-0cff87209a82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed1d996cd1300f3bb16b61b994ef4f54dc63d091344c9f5ba0352c9c0770e8fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dd97c6cb9aa1bad68bc5df66df1561ea3b9e
38dabe1aa17c100f913e6a3e0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:42Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:42 crc kubenswrapper[5014]: I0228 04:35:42.227684 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719151f8073cf997e9ca1087dcbb47e8f83f0e25a700251737c5a5d4b39bfde4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 
04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:42Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:42 crc kubenswrapper[5014]: I0228 04:35:42.237845 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:42Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:42 crc kubenswrapper[5014]: I0228 04:35:42.246553 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8709dcb48a2e6302ad67942cb60afcb91050a57a792ea2439f62a08dda2f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-28T04:35:42Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:42 crc kubenswrapper[5014]: I0228 04:35:42.254352 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572758a432519d3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:42Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:42 crc kubenswrapper[5014]: I0228 04:35:42.264307 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2078c764-fc6e-49ec-a14b-c3ec7a2d5d4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849ae83a4bc8172d4bc0c361d8f565dcf9d1d71e833d4d875481bbe8b7eca349\\\",\\\"image\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a4e3ad977a015ef15dad7b23433f35d16245c4d2d38b6008a20070f72a20e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0543da82ec1087bd70c14cbb530ed3ee36e372c2d8180bff84cb16d5c4d0d0eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":
true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8affb00e58f77f0887ac9b694df7c6443d27a05af7e2b82ad913287c90e6fd32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8affb00e58f77f0887ac9b694df7c6443d27a05af7e2b82ad913287c90e6fd32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:42Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:42 crc kubenswrapper[5014]: E0228 04:35:42.269917 5014 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 28 04:35:42 crc kubenswrapper[5014]: I0228 04:35:42.281859 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192
.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:42Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:42 crc kubenswrapper[5014]: I0228 04:35:42.292934 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:42Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:42 crc kubenswrapper[5014]: I0228 04:35:42.303586 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f40dbe122a22253898d35b10f0744955b906a164a96294c156ff7a092ca1a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd405
22302f9192f50bd3b860a550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:42Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:42 crc kubenswrapper[5014]: I0228 04:35:42.315108 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ad3ec48e4ea5a2629572974dd2869f20e3c380d53e4ea670a60eb00bd3420fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ca
6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:42Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:42 crc kubenswrapper[5014]: I0228 04:35:42.327617 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kqnsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e74c76-dc4a-4ab9-a25a-0e925a384492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8528e1edba66f9142fd7b6205f054c15193d6759027c894d8e912f746f642c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4nws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kqnsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:42Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:42 crc kubenswrapper[5014]: I0228 04:35:42.345207 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:42Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:42 crc kubenswrapper[5014]: I0228 04:35:42.357869 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqllg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2258094-df28-401d-aa20-0931bedcb66b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:42Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:42 crc 
kubenswrapper[5014]: I0228 04:35:42.370309 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:42Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:42 crc kubenswrapper[5014]: I0228 04:35:42.384158 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:42Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:42 crc kubenswrapper[5014]: I0228 04:35:42.396283 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:42Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:42 crc kubenswrapper[5014]: I0228 04:35:42.408125 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a34b6ee071d2e3fa127cc0c9a1033e04c35415029f7b8cd8babfa9ed00607b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:42Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:43 crc kubenswrapper[5014]: I0228 04:35:43.188659 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 28 04:35:43 crc kubenswrapper[5014]: I0228 04:35:43.508454 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:43 crc kubenswrapper[5014]: I0228 04:35:43.508493 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:43 crc kubenswrapper[5014]: I0228 04:35:43.508505 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:43 crc kubenswrapper[5014]: I0228 04:35:43.508520 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:43 crc kubenswrapper[5014]: I0228 04:35:43.508533 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:43Z","lastTransitionTime":"2026-02-28T04:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:43 crc kubenswrapper[5014]: E0228 04:35:43.525515 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:43Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:43 crc kubenswrapper[5014]: I0228 04:35:43.531428 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:43 crc kubenswrapper[5014]: I0228 04:35:43.531488 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:43 crc kubenswrapper[5014]: I0228 04:35:43.531506 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:43 crc kubenswrapper[5014]: I0228 04:35:43.531533 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:43 crc kubenswrapper[5014]: I0228 04:35:43.531591 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:43Z","lastTransitionTime":"2026-02-28T04:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:43 crc kubenswrapper[5014]: I0228 04:35:43.558851 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:43 crc kubenswrapper[5014]: I0228 04:35:43.558927 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:43 crc kubenswrapper[5014]: I0228 04:35:43.558982 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:43 crc kubenswrapper[5014]: I0228 04:35:43.559009 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:43 crc kubenswrapper[5014]: I0228 04:35:43.559026 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:43Z","lastTransitionTime":"2026-02-28T04:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:43 crc kubenswrapper[5014]: E0228 04:35:43.579197 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:43Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:43 crc kubenswrapper[5014]: I0228 04:35:43.585013 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:43 crc kubenswrapper[5014]: I0228 04:35:43.585081 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:43 crc kubenswrapper[5014]: I0228 04:35:43.585094 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:43 crc kubenswrapper[5014]: I0228 04:35:43.585113 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:43 crc kubenswrapper[5014]: I0228 04:35:43.585127 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:43Z","lastTransitionTime":"2026-02-28T04:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:43 crc kubenswrapper[5014]: E0228 04:35:43.604893 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:43Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:43 crc kubenswrapper[5014]: I0228 04:35:43.610286 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:43 crc kubenswrapper[5014]: I0228 04:35:43.610332 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:43 crc kubenswrapper[5014]: I0228 04:35:43.610342 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:43 crc kubenswrapper[5014]: I0228 04:35:43.610359 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:43 crc kubenswrapper[5014]: I0228 04:35:43.610369 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:43Z","lastTransitionTime":"2026-02-28T04:35:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:43 crc kubenswrapper[5014]: E0228 04:35:43.629407 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:43Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:43 crc kubenswrapper[5014]: E0228 04:35:43.629650 5014 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 28 04:35:44 crc kubenswrapper[5014]: I0228 04:35:44.170984 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:35:44 crc kubenswrapper[5014]: I0228 04:35:44.171067 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:35:44 crc kubenswrapper[5014]: I0228 04:35:44.171137 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:35:44 crc kubenswrapper[5014]: I0228 04:35:44.171298 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:44 crc kubenswrapper[5014]: E0228 04:35:44.171277 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:35:44 crc kubenswrapper[5014]: E0228 04:35:44.171502 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:35:44 crc kubenswrapper[5014]: E0228 04:35:44.171528 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:35:44 crc kubenswrapper[5014]: E0228 04:35:44.171566 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:35:44 crc kubenswrapper[5014]: I0228 04:35:44.321053 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2258094-df28-401d-aa20-0931bedcb66b-metrics-certs\") pod \"network-metrics-daemon-rqllg\" (UID: \"a2258094-df28-401d-aa20-0931bedcb66b\") " pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:35:44 crc kubenswrapper[5014]: E0228 04:35:44.321208 5014 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 04:35:44 crc kubenswrapper[5014]: E0228 04:35:44.321261 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2258094-df28-401d-aa20-0931bedcb66b-metrics-certs podName:a2258094-df28-401d-aa20-0931bedcb66b nodeName:}" failed. No retries permitted until 2026-02-28 04:36:00.32124482 +0000 UTC m=+148.991370730 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a2258094-df28-401d-aa20-0931bedcb66b-metrics-certs") pod "network-metrics-daemon-rqllg" (UID: "a2258094-df28-401d-aa20-0931bedcb66b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 04:35:46 crc kubenswrapper[5014]: I0228 04:35:46.171088 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:35:46 crc kubenswrapper[5014]: I0228 04:35:46.171118 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:46 crc kubenswrapper[5014]: I0228 04:35:46.171217 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:35:46 crc kubenswrapper[5014]: E0228 04:35:46.171224 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:35:46 crc kubenswrapper[5014]: I0228 04:35:46.171084 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:35:46 crc kubenswrapper[5014]: E0228 04:35:46.171500 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:35:46 crc kubenswrapper[5014]: E0228 04:35:46.171528 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:35:46 crc kubenswrapper[5014]: E0228 04:35:46.171575 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:35:47 crc kubenswrapper[5014]: I0228 04:35:47.173297 5014 scope.go:117] "RemoveContainer" containerID="6e4c9ed4ecb01a842a39956de2f374df82394dd4c46ac4b11eb627ba0cd691ef" Feb 28 04:35:47 crc kubenswrapper[5014]: E0228 04:35:47.271425 5014 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 28 04:35:47 crc kubenswrapper[5014]: I0228 04:35:47.640681 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:35:47 crc kubenswrapper[5014]: I0228 04:35:47.665409 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:47Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:47 crc kubenswrapper[5014]: I0228 04:35:47.691417 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a34b6ee071d2e3fa127cc0c9a1033e04c35415029f7b8cd8babfa9ed00607b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:47Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:47 crc kubenswrapper[5014]: I0228 04:35:47.728279 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:47Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:47 crc kubenswrapper[5014]: I0228 04:35:47.764944 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:47Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:47 crc kubenswrapper[5014]: I0228 04:35:47.788288 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719151f8073cf997e9ca1087dcbb47e8f83f0e25a700251737c5a5d4b39bfde4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476
802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:47Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:47 crc kubenswrapper[5014]: I0228 04:35:47.816113 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:47Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:47 crc kubenswrapper[5014]: I0228 04:35:47.825091 5014 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-62hnq_faa5db1f-df50-492a-9d45-d5065bdc63d2/ovnkube-controller/1.log" Feb 28 04:35:47 crc kubenswrapper[5014]: I0228 04:35:47.828497 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" event={"ID":"faa5db1f-df50-492a-9d45-d5065bdc63d2","Type":"ContainerStarted","Data":"6f0c74249d77ec0ad313792cad1cf7d8c29f8cf913dd65f97fc0f3caca6285e2"} Feb 28 04:35:47 crc kubenswrapper[5014]: I0228 04:35:47.828947 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:35:47 crc kubenswrapper[5014]: I0228 04:35:47.836349 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6e4c9ed4ecb01a842a39956de2f374df82394dd4c46ac4b11eb627ba0cd691ef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e4c9ed4ecb01a842a39956de2f374df82394dd4c46ac4b11eb627ba0cd691ef\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"message\\\":\\\"e\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, 
Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0228 04:35:29.894300 7027 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-62hnq_openshift-ovn-kubernetes(faa5db1f-df50-492a-9d45-d5065bdc63d2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26
d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:47Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:47 crc kubenswrapper[5014]: I0228 04:35:47.849465 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93c2fbd-22ea-4935-9d13-0cff87209a82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed1d996cd1300f3bb16b61b994ef4f54dc63d091344c9f5ba0352c9c0770e8fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dd97c6cb9aa1bad68bc5df66df1561ea3b9e
38dabe1aa17c100f913e6a3e0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:47Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:47 crc kubenswrapper[5014]: I0228 04:35:47.862125 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8709dcb48a2e6302ad67942cb60afcb91050a57a792ea2439f62a08dda2f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T04:35:47Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:47 crc kubenswrapper[5014]: I0228 04:35:47.876726 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572758a432519d3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:47Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:47 crc kubenswrapper[5014]: I0228 04:35:47.904773 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68
77441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e7790
36cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/e
tc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6
d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:47Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:47 crc kubenswrapper[5014]: I0228 04:35:47.924147 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"396380eb-5c77-43a8-9a21-814c3e8888f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714067117f5f948a99fcf525805dbc84659d7486a7d90a54aa54a9b924c7cbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7968cb4f05079288706430331f9a9b96767af7ae0cafa8f46bf17c437a39275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:02Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0228 04:33:34.285189 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0228 04:33:34.288153 1 observer_polling.go:159] Starting file observer\\\\nI0228 04:33:34.331271 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0228 04:33:34.337611 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0228 04:34:02.261141 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0228 04:34:02.261276 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d814b990a8048b944b6ca2b8a1aa5b585368ce3a5d89b7b0993e92a291fa9fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91526622180b27c5558c8064508b1328f9196ac7ffd6f9c5f86bc616d9b6248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2127cf622ac7487c457107170b279eee2bd4abf6ce87378e3b8a17423a25c812\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:47Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:47 crc kubenswrapper[5014]: I0228 04:35:47.940920 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2078c764-fc6e-49ec-a14b-c3ec7a2d5d4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849ae83a4bc8172d4bc0c361d8f565dcf9d1d71e833d4d875481bbe8b7eca349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b3
35e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a4e3ad977a015ef15dad7b23433f35d16245c4d2d38b6008a20070f72a20e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0543da82ec1087bd70c14cbb530ed3ee36e372c2d8180bff84cb16d5c4d0d0eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8affb00e58f77f0887ac9b694df7c6443d27a05af7e2b82ad913287c90e6fd32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8affb00e58f77f0887ac9b694df7c6443d27a05af7e2b82ad913287c90e6fd32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:47Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:47 crc kubenswrapper[5014]: I0228 04:35:47.965760 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:47Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:47 crc kubenswrapper[5014]: I0228 04:35:47.985593 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:47Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.000192 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f40dbe122a22253898d35b10f0744955b906a164a96294c156ff7a092ca1a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd405
22302f9192f50bd3b860a550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:47Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.020336 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ad3ec48e4ea5a2629572974dd2869f20e3c380d53e4ea670a60eb00bd3420fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ca
6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.035263 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kqnsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e74c76-dc4a-4ab9-a25a-0e925a384492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8528e1edba66f9142fd7b6205f054c15193d6759027c894d8e912f746f642c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4nws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kqnsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.049292 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqllg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2258094-df28-401d-aa20-0931bedcb66b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqllg\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.064666 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.081680 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a34b6ee071d2e3fa127cc0c9a1033e04c35415029f7b8cd8babfa9ed00607b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.095304 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.111313 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.125594 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719151f8073cf997e9ca1087dcbb47e8f83f0e25a700251737c5a5d4b39bfde4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476
802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.139340 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.160081 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0c74249d77ec0ad313792cad1cf7d8c29f8cf913dd65f97fc0f3caca6285e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e4c9ed4ecb01a842a39956de2f374df82394dd4c46ac4b11eb627ba0cd691ef\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"message\\\":\\\"e\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, 
Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0228 04:35:29.894300 7027 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.171617 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.171623 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:35:48 crc kubenswrapper[5014]: E0228 04:35:48.171773 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.171619 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.171636 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:35:48 crc kubenswrapper[5014]: E0228 04:35:48.171930 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:35:48 crc kubenswrapper[5014]: E0228 04:35:48.172033 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:35:48 crc kubenswrapper[5014]: E0228 04:35:48.172136 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.182709 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93c2fbd-22ea-4935-9d13-0cff87209a82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed1d996cd1300f3bb16b61b994ef4f54dc63d091344c9f5ba0352c9c0770e8fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dd97c6cb9aa1bad68bc5df66df1561ea3b9e38dabe1aa17c100f913e6a3e0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.198788 5014 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8709dcb48a2e6302ad67942cb60afcb91050a57a792ea2439f62a08dda2f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.216682 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572758a432519d3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\
\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.236283 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ad3ec48e4ea5a2629572974dd2869f20e3c380d53e4ea670a60eb00bd3420fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ca
6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.245796 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kqnsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e74c76-dc4a-4ab9-a25a-0e925a384492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8528e1edba66f9142fd7b6205f054c15193d6759027c894d8e912f746f642c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4nws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kqnsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.265343 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.283955 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"396380eb-5c77-43a8-9a21-814c3e8888f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714067117f5f948a99fcf525805dbc84659d7486a7d90a54aa54a9b924c7cbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7968cb4f05079288706430331f9a9b96767af7ae0cafa8f46bf17c437a39275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:02Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0228 04:33:34.285189 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0228 04:33:34.288153 1 observer_polling.go:159] Starting file observer\\\\nI0228 04:33:34.331271 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0228 04:33:34.337611 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0228 04:34:02.261141 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0228 04:34:02.261276 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d814b990a8048b944b6ca2b8a1aa5b585368ce3a5d89b7b0993e92a291fa9fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91526622180b27c5558c8064508b1328f9196ac7ffd6f9c5f86bc616d9b6248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2127cf622ac7487c457107170b279eee2bd4abf6ce87378e3b8a17423a25c812\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.301460 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2078c764-fc6e-49ec-a14b-c3ec7a2d5d4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849ae83a4bc8172d4bc0c361d8f565dcf9d1d71e833d4d875481bbe8b7eca349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-sche
duler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a4e3ad977a015ef15dad7b23433f35d16245c4d2d38b6008a20070f72a20e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0543da82ec1087bd70c14cbb530ed3ee36e372c2d8180bff84cb16d5c4d0d0eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]
}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8affb00e58f77f0887ac9b694df7c6443d27a05af7e2b82ad913287c90e6fd32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8affb00e58f77f0887ac9b694df7c6443d27a05af7e2b82ad913287c90e6fd32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.317458 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.340138 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.359978 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f40dbe122a22253898d35b10f0744955b906a164a96294c156ff7a092ca1a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd405
22302f9192f50bd3b860a550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.375781 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2258094-df28-401d-aa20-0931bedcb66b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:48 crc 
kubenswrapper[5014]: I0228 04:35:48.836670 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-62hnq_faa5db1f-df50-492a-9d45-d5065bdc63d2/ovnkube-controller/2.log" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.842234 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-62hnq_faa5db1f-df50-492a-9d45-d5065bdc63d2/ovnkube-controller/1.log" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.846222 5014 generic.go:334] "Generic (PLEG): container finished" podID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerID="6f0c74249d77ec0ad313792cad1cf7d8c29f8cf913dd65f97fc0f3caca6285e2" exitCode=1 Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.846408 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" event={"ID":"faa5db1f-df50-492a-9d45-d5065bdc63d2","Type":"ContainerDied","Data":"6f0c74249d77ec0ad313792cad1cf7d8c29f8cf913dd65f97fc0f3caca6285e2"} Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.846529 5014 scope.go:117] "RemoveContainer" containerID="6e4c9ed4ecb01a842a39956de2f374df82394dd4c46ac4b11eb627ba0cd691ef" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.847598 5014 scope.go:117] "RemoveContainer" containerID="6f0c74249d77ec0ad313792cad1cf7d8c29f8cf913dd65f97fc0f3caca6285e2" Feb 28 04:35:48 crc kubenswrapper[5014]: E0228 04:35:48.847951 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-62hnq_openshift-ovn-kubernetes(faa5db1f-df50-492a-9d45-d5065bdc63d2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.867980 5014 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2078c764-fc6e-49ec-a14b-c3ec7a2d5d4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849ae83a4bc8172d4bc0c361d8f565dcf9d1d71e833d4d875481bbe8b7eca349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a4e3ad977a015ef15dad7b23433f35d16245c4d2d38b6008a20070f72a20e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0543da82ec1087bd70c14cbb530ed3ee36e372c2d8180bff84cb16d5c4d0d0eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8affb00e58f77f0887ac9b694df7c6443d27a05af7e2b82ad913287c90e6fd32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8affb00e58f77f0887ac9b694df7c6443d27a05af7e2b82ad913287c90e6fd32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.882175 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.897242 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.911950 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f40dbe122a22253898d35b10f0744955b906a164a96294c156ff7a092ca1a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd405
22302f9192f50bd3b860a550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.928396 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ad3ec48e4ea5a2629572974dd2869f20e3c380d53e4ea670a60eb00bd3420fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ca
6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.940912 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kqnsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e74c76-dc4a-4ab9-a25a-0e925a384492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8528e1edba66f9142fd7b6205f054c15193d6759027c894d8e912f746f642c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4nws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kqnsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.961934 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.978344 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"396380eb-5c77-43a8-9a21-814c3e8888f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714067117f5f948a99fcf525805dbc84659d7486a7d90a54aa54a9b924c7cbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7968cb4f05079288706430331f9a9b96767af7ae0cafa8f46bf17c437a39275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:02Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0228 04:33:34.285189 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0228 04:33:34.288153 1 observer_polling.go:159] Starting file observer\\\\nI0228 04:33:34.331271 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0228 04:33:34.337611 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0228 04:34:02.261141 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0228 04:34:02.261276 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d814b990a8048b944b6ca2b8a1aa5b585368ce3a5d89b7b0993e92a291fa9fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91526622180b27c5558c8064508b1328f9196ac7ffd6f9c5f86bc616d9b6248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2127cf622ac7487c457107170b279eee2bd4abf6ce87378e3b8a17423a25c812\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:48 crc kubenswrapper[5014]: I0228 04:35:48.997841 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqllg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2258094-df28-401d-aa20-0931bedcb66b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:49 crc 
kubenswrapper[5014]: I0228 04:35:49.017886 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:49Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:49 crc kubenswrapper[5014]: I0228 04:35:49.035827 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:49Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:49 crc kubenswrapper[5014]: I0228 04:35:49.053278 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:49Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:49 crc kubenswrapper[5014]: I0228 04:35:49.068045 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a34b6ee071d2e3fa127cc0c9a1033e04c35415029f7b8cd8babfa9ed00607b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:49Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:49 crc kubenswrapper[5014]: I0228 04:35:49.091685 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0c74249d77ec0ad313792cad1cf7d8c29f8cf913dd65f97fc0f3caca6285e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6e4c9ed4ecb01a842a39956de2f374df82394dd4c46ac4b11eb627ba0cd691ef\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"message\\\":\\\"e\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, 
Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-route-controller-manager/route-controller-manager\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.239\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nF0228 04:35:29.894300 7027 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0c74249d77ec0ad313792cad1cf7d8c29f8cf913dd65f97fc0f3caca6285e2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T04:35:48Z\\\",\\\"message\\\":\\\"ler initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed 
to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z]\\\\nI0228 04:35:48.215373 7304 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"89fe421e-04e8-4967-ac75-77a0e6f784ef\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", 
UUI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:49Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:49 crc kubenswrapper[5014]: I0228 04:35:49.105989 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93c2fbd-22ea-4935-9d13-0cff87209a82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed1d996cd1300f3bb16b61b994ef4f54dc63d091344c9f5ba0352c9c0770e8fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dd97c6cb9aa1bad68bc5df66df1561ea3b9e
38dabe1aa17c100f913e6a3e0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:49Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:49 crc kubenswrapper[5014]: I0228 04:35:49.122790 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719151f8073cf997e9ca1087dcbb47e8f83f0e25a700251737c5a5d4b39bfde4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476
802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:49Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:49 crc kubenswrapper[5014]: I0228 04:35:49.143032 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:49Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:49 crc kubenswrapper[5014]: I0228 04:35:49.158592 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8709dcb48a2e6302ad67942cb60afcb91050a57a792ea2439f62a08dda2f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-28T04:35:49Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:49 crc kubenswrapper[5014]: I0228 04:35:49.175454 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572758a432519d3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:49Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:49 crc kubenswrapper[5014]: I0228 04:35:49.853994 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-62hnq_faa5db1f-df50-492a-9d45-d5065bdc63d2/ovnkube-controller/2.log" Feb 28 04:35:49 crc kubenswrapper[5014]: I0228 04:35:49.859421 5014 scope.go:117] "RemoveContainer" containerID="6f0c74249d77ec0ad313792cad1cf7d8c29f8cf913dd65f97fc0f3caca6285e2" Feb 28 04:35:49 crc kubenswrapper[5014]: E0228 04:35:49.860187 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-62hnq_openshift-ovn-kubernetes(faa5db1f-df50-492a-9d45-d5065bdc63d2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" Feb 28 04:35:49 crc kubenswrapper[5014]: I0228 04:35:49.886056 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8709dcb48a2e6302ad67942cb60afcb91050a57a792ea2439f62a08dda2f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T04:35:49Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:49 crc kubenswrapper[5014]: I0228 04:35:49.906385 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572758a432519d3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:49Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:49 crc kubenswrapper[5014]: I0228 04:35:49.927479 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:49Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:49 crc kubenswrapper[5014]: I0228 04:35:49.947545 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:49Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:49 crc kubenswrapper[5014]: I0228 04:35:49.968911 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f40dbe122a22253898d35b10f0744955b906a164a96294c156ff7a092ca1a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd405
22302f9192f50bd3b860a550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:49Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:49 crc kubenswrapper[5014]: I0228 04:35:49.991178 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ad3ec48e4ea5a2629572974dd2869f20e3c380d53e4ea670a60eb00bd3420fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ca
6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:49Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:50 crc kubenswrapper[5014]: I0228 04:35:50.012702 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kqnsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e74c76-dc4a-4ab9-a25a-0e925a384492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8528e1edba66f9142fd7b6205f054c15193d6759027c894d8e912f746f642c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4nws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kqnsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:50Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:50 crc kubenswrapper[5014]: I0228 04:35:50.048427 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:50Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:50 crc kubenswrapper[5014]: I0228 04:35:50.070453 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"396380eb-5c77-43a8-9a21-814c3e8888f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714067117f5f948a99fcf525805dbc84659d7486a7d90a54aa54a9b924c7cbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7968cb4f05079288706430331f9a9b96767af7ae0cafa8f46bf17c437a39275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:02Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0228 04:33:34.285189 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0228 04:33:34.288153 1 observer_polling.go:159] Starting file observer\\\\nI0228 04:33:34.331271 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0228 04:33:34.337611 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0228 04:34:02.261141 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0228 04:34:02.261276 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d814b990a8048b944b6ca2b8a1aa5b585368ce3a5d89b7b0993e92a291fa9fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91526622180b27c5558c8064508b1328f9196ac7ffd6f9c5f86bc616d9b6248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2127cf622ac7487c457107170b279eee2bd4abf6ce87378e3b8a17423a25c812\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:50Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:50 crc kubenswrapper[5014]: I0228 04:35:50.088100 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2078c764-fc6e-49ec-a14b-c3ec7a2d5d4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849ae83a4bc8172d4bc0c361d8f565dcf9d1d71e833d4d875481bbe8b7eca349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-sche
duler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a4e3ad977a015ef15dad7b23433f35d16245c4d2d38b6008a20070f72a20e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0543da82ec1087bd70c14cbb530ed3ee36e372c2d8180bff84cb16d5c4d0d0eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]
}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8affb00e58f77f0887ac9b694df7c6443d27a05af7e2b82ad913287c90e6fd32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8affb00e58f77f0887ac9b694df7c6443d27a05af7e2b82ad913287c90e6fd32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:50Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:50 crc kubenswrapper[5014]: I0228 04:35:50.108332 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2258094-df28-401d-aa20-0931bedcb66b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:50Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:50 crc 
kubenswrapper[5014]: I0228 04:35:50.136907 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:50Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:50 crc kubenswrapper[5014]: I0228 04:35:50.156583 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:50Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:50 crc kubenswrapper[5014]: I0228 04:35:50.170978 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:35:50 crc kubenswrapper[5014]: I0228 04:35:50.171070 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:35:50 crc kubenswrapper[5014]: I0228 04:35:50.171106 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:35:50 crc kubenswrapper[5014]: I0228 04:35:50.171012 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:50 crc kubenswrapper[5014]: E0228 04:35:50.171316 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:35:50 crc kubenswrapper[5014]: E0228 04:35:50.171416 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:35:50 crc kubenswrapper[5014]: E0228 04:35:50.171596 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:35:50 crc kubenswrapper[5014]: E0228 04:35:50.171986 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:35:50 crc kubenswrapper[5014]: I0228 04:35:50.179859 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a34b6ee071d2e3fa127cc0c9a1033e04c35415029f7b8cd8babfa9ed00607b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:50Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:50 crc kubenswrapper[5014]: I0228 04:35:50.196312 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:50Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:50 crc kubenswrapper[5014]: I0228 04:35:50.216205 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93c2fbd-22ea-4935-9d13-0cff87209a82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed1d996cd1300f3bb16b61b994ef4f54dc63d091344c9f5ba0352c9c0770e8fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dd97c6cb9aa1bad68bc5df66df1561ea3b9e
38dabe1aa17c100f913e6a3e0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:50Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:50 crc kubenswrapper[5014]: I0228 04:35:50.238521 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719151f8073cf997e9ca1087dcbb47e8f83f0e25a700251737c5a5d4b39bfde4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476
802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:50Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:50 crc kubenswrapper[5014]: I0228 04:35:50.261397 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:50Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:50 crc kubenswrapper[5014]: I0228 04:35:50.291470 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0c74249d77ec0ad313792cad1cf7d8c29f8cf913dd65f97fc0f3caca6285e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0c74249d77ec0ad313792cad1cf7d8c29f8cf913dd65f97fc0f3caca6285e2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T04:35:48Z\\\",\\\"message\\\":\\\"ler initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook 
\\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z]\\\\nI0228 04:35:48.215373 7304 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"89fe421e-04e8-4967-ac75-77a0e6f784ef\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-62hnq_openshift-ovn-kubernetes(faa5db1f-df50-492a-9d45-d5065bdc63d2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26
d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:50Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:52 crc kubenswrapper[5014]: I0228 04:35:52.171135 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:52 crc kubenswrapper[5014]: I0228 04:35:52.171220 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:35:52 crc kubenswrapper[5014]: I0228 04:35:52.171133 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:35:52 crc kubenswrapper[5014]: E0228 04:35:52.171337 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:35:52 crc kubenswrapper[5014]: E0228 04:35:52.171513 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:35:52 crc kubenswrapper[5014]: I0228 04:35:52.171543 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:35:52 crc kubenswrapper[5014]: E0228 04:35:52.171704 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:35:52 crc kubenswrapper[5014]: E0228 04:35:52.172008 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:35:52 crc kubenswrapper[5014]: I0228 04:35:52.194975 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2078c764-fc6e-49ec-a14b-c3ec7a2d5d4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849ae83a4bc8172d4bc0c361d8f565dcf9d1d71e833d4d875481bbe8b7eca349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a4e3ad977a015ef15dad7b23433f35d16245c4d2d38b6008a20070f72a20e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0543da82ec1087bd70c14cbb530ed3ee36e372c2d8180bff84cb16d5c4d0d0eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8affb00e58f77f0887ac9b694df7c6443d27a05af7e2b82ad913287c90e6fd32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de25
97126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8affb00e58f77f0887ac9b694df7c6443d27a05af7e2b82ad913287c90e6fd32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:52Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:52 crc kubenswrapper[5014]: I0228 04:35:52.211523 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:52Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:52 crc kubenswrapper[5014]: I0228 04:35:52.231595 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:52Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:52 crc kubenswrapper[5014]: I0228 04:35:52.251513 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f40dbe122a22253898d35b10f0744955b906a164a96294c156ff7a092ca1a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd405
22302f9192f50bd3b860a550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:52Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:52 crc kubenswrapper[5014]: E0228 04:35:52.272614 5014 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 28 04:35:52 crc kubenswrapper[5014]: I0228 04:35:52.282108 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ad3ec48e4ea5a2629572974dd2869f20e3c380d53e4ea670a60eb00bd3420fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostI
P\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eaa7f3c0
6dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:52Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:52 crc kubenswrapper[5014]: I0228 04:35:52.304883 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kqnsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e74c76-dc4a-4ab9-a25a-0e925a384492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8528e1edba66f9142fd7b6205f054c15193d6759027c894d8e912f746f642c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e9
6f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4nws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kqnsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:52Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:52 crc kubenswrapper[5014]: I0228 04:35:52.335309 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:52Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:52 crc kubenswrapper[5014]: I0228 04:35:52.356275 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"396380eb-5c77-43a8-9a21-814c3e8888f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714067117f5f948a99fcf525805dbc84659d7486a7d90a54aa54a9b924c7cbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7968cb4f05079288706430331f9a9b96767af7ae0cafa8f46bf17c437a39275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:02Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0228 04:33:34.285189 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0228 04:33:34.288153 1 observer_polling.go:159] Starting file observer\\\\nI0228 04:33:34.331271 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0228 04:33:34.337611 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0228 04:34:02.261141 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0228 04:34:02.261276 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d814b990a8048b944b6ca2b8a1aa5b585368ce3a5d89b7b0993e92a291fa9fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91526622180b27c5558c8064508b1328f9196ac7ffd6f9c5f86bc616d9b6248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2127cf622ac7487c457107170b279eee2bd4abf6ce87378e3b8a17423a25c812\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:52Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:52 crc kubenswrapper[5014]: I0228 04:35:52.373861 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqllg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2258094-df28-401d-aa20-0931bedcb66b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:52Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:52 crc 
kubenswrapper[5014]: I0228 04:35:52.392329 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:52Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:52 crc kubenswrapper[5014]: I0228 04:35:52.412267 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:52Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:52 crc kubenswrapper[5014]: I0228 04:35:52.428887 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:52Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:52 crc kubenswrapper[5014]: I0228 04:35:52.451189 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a34b6ee071d2e3fa127cc0c9a1033e04c35415029f7b8cd8babfa9ed00607b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:52Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:52 crc kubenswrapper[5014]: I0228 04:35:52.486498 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0c74249d77ec0ad313792cad1cf7d8c29f8cf913dd65f97fc0f3caca6285e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0c74249d77ec0ad313792cad1cf7d8c29f8cf913dd65f97fc0f3caca6285e2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T04:35:48Z\\\",\\\"message\\\":\\\"ler initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook 
\\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z]\\\\nI0228 04:35:48.215373 7304 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"89fe421e-04e8-4967-ac75-77a0e6f784ef\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-62hnq_openshift-ovn-kubernetes(faa5db1f-df50-492a-9d45-d5065bdc63d2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26
d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:52Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:52 crc kubenswrapper[5014]: I0228 04:35:52.505608 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93c2fbd-22ea-4935-9d13-0cff87209a82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed1d996cd1300f3bb16b61b994ef4f54dc63d091344c9f5ba0352c9c0770e8fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dd97c6cb9aa1bad68bc5df66df1561ea3b9e
38dabe1aa17c100f913e6a3e0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:52Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:52 crc kubenswrapper[5014]: I0228 04:35:52.528676 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719151f8073cf997e9ca1087dcbb47e8f83f0e25a700251737c5a5d4b39bfde4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476
802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:52Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:52 crc kubenswrapper[5014]: I0228 04:35:52.554356 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:52Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:52 crc kubenswrapper[5014]: I0228 04:35:52.575647 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8709dcb48a2e6302ad67942cb60afcb91050a57a792ea2439f62a08dda2f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-28T04:35:52Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:52 crc kubenswrapper[5014]: I0228 04:35:52.594141 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572758a432519d3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:52Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:53 crc kubenswrapper[5014]: I0228 04:35:53.663027 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:53 crc kubenswrapper[5014]: I0228 04:35:53.663083 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:53 crc kubenswrapper[5014]: I0228 04:35:53.663094 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:53 crc kubenswrapper[5014]: I0228 04:35:53.663112 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:53 crc kubenswrapper[5014]: I0228 04:35:53.663124 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:53Z","lastTransitionTime":"2026-02-28T04:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:53 crc kubenswrapper[5014]: E0228 04:35:53.676119 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:53Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:53 crc kubenswrapper[5014]: I0228 04:35:53.679987 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:53 crc kubenswrapper[5014]: I0228 04:35:53.680107 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:53 crc kubenswrapper[5014]: I0228 04:35:53.680207 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:53 crc kubenswrapper[5014]: I0228 04:35:53.680321 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:53 crc kubenswrapper[5014]: I0228 04:35:53.680406 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:53Z","lastTransitionTime":"2026-02-28T04:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:53 crc kubenswrapper[5014]: E0228 04:35:53.697746 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:53Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:53 crc kubenswrapper[5014]: I0228 04:35:53.704043 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:53 crc kubenswrapper[5014]: I0228 04:35:53.704103 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:53 crc kubenswrapper[5014]: I0228 04:35:53.704119 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:53 crc kubenswrapper[5014]: I0228 04:35:53.704144 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:53 crc kubenswrapper[5014]: I0228 04:35:53.704158 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:53Z","lastTransitionTime":"2026-02-28T04:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:53 crc kubenswrapper[5014]: E0228 04:35:53.723391 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:53Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:53 crc kubenswrapper[5014]: I0228 04:35:53.728898 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:53 crc kubenswrapper[5014]: I0228 04:35:53.729056 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:53 crc kubenswrapper[5014]: I0228 04:35:53.729173 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:53 crc kubenswrapper[5014]: I0228 04:35:53.729321 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:53 crc kubenswrapper[5014]: I0228 04:35:53.729419 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:53Z","lastTransitionTime":"2026-02-28T04:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:53 crc kubenswrapper[5014]: E0228 04:35:53.751518 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:53Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:53 crc kubenswrapper[5014]: I0228 04:35:53.758090 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:35:53 crc kubenswrapper[5014]: I0228 04:35:53.758118 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:35:53 crc kubenswrapper[5014]: I0228 04:35:53.758128 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:35:53 crc kubenswrapper[5014]: I0228 04:35:53.758149 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:35:53 crc kubenswrapper[5014]: I0228 04:35:53.758161 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:35:53Z","lastTransitionTime":"2026-02-28T04:35:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:35:53 crc kubenswrapper[5014]: E0228 04:35:53.774238 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:53Z is after 2025-08-24T17:21:41Z" Feb 28 04:35:53 crc kubenswrapper[5014]: E0228 04:35:53.774417 5014 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 28 04:35:54 crc kubenswrapper[5014]: I0228 04:35:54.170838 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:35:54 crc kubenswrapper[5014]: E0228 04:35:54.171023 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:35:54 crc kubenswrapper[5014]: I0228 04:35:54.171065 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:35:54 crc kubenswrapper[5014]: I0228 04:35:54.171153 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:54 crc kubenswrapper[5014]: E0228 04:35:54.171253 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:35:54 crc kubenswrapper[5014]: E0228 04:35:54.171353 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:35:54 crc kubenswrapper[5014]: I0228 04:35:54.171419 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:35:54 crc kubenswrapper[5014]: E0228 04:35:54.171690 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:35:56 crc kubenswrapper[5014]: I0228 04:35:56.171196 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:56 crc kubenswrapper[5014]: I0228 04:35:56.171258 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:35:56 crc kubenswrapper[5014]: E0228 04:35:56.172186 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:35:56 crc kubenswrapper[5014]: I0228 04:35:56.171479 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:35:56 crc kubenswrapper[5014]: I0228 04:35:56.171321 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:35:56 crc kubenswrapper[5014]: E0228 04:35:56.172434 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:35:56 crc kubenswrapper[5014]: E0228 04:35:56.172679 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:35:56 crc kubenswrapper[5014]: E0228 04:35:56.172952 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:35:57 crc kubenswrapper[5014]: E0228 04:35:57.275028 5014 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 28 04:35:58 crc kubenswrapper[5014]: I0228 04:35:58.171365 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:35:58 crc kubenswrapper[5014]: I0228 04:35:58.171455 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:35:58 crc kubenswrapper[5014]: I0228 04:35:58.171396 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:35:58 crc kubenswrapper[5014]: I0228 04:35:58.171365 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:35:58 crc kubenswrapper[5014]: E0228 04:35:58.171591 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:35:58 crc kubenswrapper[5014]: E0228 04:35:58.171680 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:35:58 crc kubenswrapper[5014]: E0228 04:35:58.171767 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:35:58 crc kubenswrapper[5014]: E0228 04:35:58.171950 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:36:00 crc kubenswrapper[5014]: I0228 04:36:00.171116 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:36:00 crc kubenswrapper[5014]: I0228 04:36:00.171167 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:36:00 crc kubenswrapper[5014]: I0228 04:36:00.171262 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:36:00 crc kubenswrapper[5014]: E0228 04:36:00.171273 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:36:00 crc kubenswrapper[5014]: E0228 04:36:00.171423 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:36:00 crc kubenswrapper[5014]: I0228 04:36:00.171490 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:36:00 crc kubenswrapper[5014]: E0228 04:36:00.171636 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:36:00 crc kubenswrapper[5014]: E0228 04:36:00.171714 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:36:00 crc kubenswrapper[5014]: I0228 04:36:00.418123 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2258094-df28-401d-aa20-0931bedcb66b-metrics-certs\") pod \"network-metrics-daemon-rqllg\" (UID: \"a2258094-df28-401d-aa20-0931bedcb66b\") " pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:36:00 crc kubenswrapper[5014]: E0228 04:36:00.418388 5014 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 04:36:00 crc kubenswrapper[5014]: E0228 04:36:00.418524 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2258094-df28-401d-aa20-0931bedcb66b-metrics-certs podName:a2258094-df28-401d-aa20-0931bedcb66b nodeName:}" failed. No retries permitted until 2026-02-28 04:36:32.418493271 +0000 UTC m=+181.088619231 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a2258094-df28-401d-aa20-0931bedcb66b-metrics-certs") pod "network-metrics-daemon-rqllg" (UID: "a2258094-df28-401d-aa20-0931bedcb66b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 04:36:02 crc kubenswrapper[5014]: I0228 04:36:02.171735 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:36:02 crc kubenswrapper[5014]: E0228 04:36:02.172065 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:36:02 crc kubenswrapper[5014]: I0228 04:36:02.172217 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:36:02 crc kubenswrapper[5014]: I0228 04:36:02.172416 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:36:02 crc kubenswrapper[5014]: I0228 04:36:02.172494 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:36:02 crc kubenswrapper[5014]: E0228 04:36:02.172530 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:36:02 crc kubenswrapper[5014]: E0228 04:36:02.172677 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:36:02 crc kubenswrapper[5014]: E0228 04:36:02.172871 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:36:02 crc kubenswrapper[5014]: I0228 04:36:02.192714 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572758a432519d3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c
1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:02Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:02 crc kubenswrapper[5014]: I0228 04:36:02.214422 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8709dcb48a2e6302ad67942cb60afcb91050a57a792ea2439f62a08dda2f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T04:36:02Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:02 crc kubenswrapper[5014]: I0228 04:36:02.232283 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"396380eb-5c77-43a8-9a21-814c3e8888f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714067117f5f948a99fcf525805dbc84659d7486a7d90a54aa54a9b924c7cbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7968cb4f05079288706430331f9a9b96767af7ae0cafa8f46bf17c437a39275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:02Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ 
'[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0228 04:33:34.285189 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0228 04:33:34.288153 1 observer_polling.go:159] Starting file observer\\\\nI0228 04:33:34.331271 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0228 04:33:34.337611 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0228 04:34:02.261141 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0228 04:34:02.261276 1 cmd.go:179] failed checking apiserver connectivity: 
Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d814b990a8048b944b6ca2b8a1aa5b585368ce3a5d89b7b0993e92a291fa9fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91526622180b27c5558c8064508b1328f9196ac7ffd6f9c5f86bc616d9b6248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2127cf622ac7487c457107170b279eee2bd4abf6ce87378e3b8a17423a25c812\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:02Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:02 crc kubenswrapper[5014]: I0228 04:36:02.252055 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2078c764-fc6e-49ec-a14b-c3ec7a2d5d4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849ae83a4bc8172d4bc0c361d8f565dcf9d1d71e833d4d875481bbe8b7eca349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a4e3ad977a015ef15dad7b23433f35d16245c4d2d38b6008a20070f72a20e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0543da82ec1087bd70c14cbb530ed3ee36e372c2d8180bff84cb16d5c4d0d0eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8affb00e58f77f0887ac9b694df7c6443d27a05af7e2b82ad913287c90e6fd32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://8affb00e58f77f0887ac9b694df7c6443d27a05af7e2b82ad913287c90e6fd32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:02Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:02 crc kubenswrapper[5014]: I0228 04:36:02.273434 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:02Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:02 crc kubenswrapper[5014]: E0228 04:36:02.276027 5014 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 28 04:36:02 crc kubenswrapper[5014]: I0228 04:36:02.298227 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:02Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:02 crc kubenswrapper[5014]: I0228 04:36:02.319241 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f40dbe122a22253898d35b10f0744955b906a164a96294c156ff7a092ca1a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd405
22302f9192f50bd3b860a550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:02Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:02 crc kubenswrapper[5014]: I0228 04:36:02.336890 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ad3ec48e4ea5a2629572974dd2869f20e3c380d53e4ea670a60eb00bd3420fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ca
6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:02Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:02 crc kubenswrapper[5014]: I0228 04:36:02.352194 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kqnsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e74c76-dc4a-4ab9-a25a-0e925a384492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8528e1edba66f9142fd7b6205f054c15193d6759027c894d8e912f746f642c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4nws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kqnsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:02Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:02 crc kubenswrapper[5014]: I0228 04:36:02.380531 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:02Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:02 crc kubenswrapper[5014]: I0228 04:36:02.396310 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqllg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2258094-df28-401d-aa20-0931bedcb66b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:02Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:02 crc 
kubenswrapper[5014]: I0228 04:36:02.412467 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a34b6ee071d2e3fa127cc0c9a1033e04c35415029f7b8cd8babfa9ed00607b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:02Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:02 crc kubenswrapper[5014]: I0228 04:36:02.428588 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:02Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:02 crc kubenswrapper[5014]: I0228 04:36:02.450041 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:02Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:02 crc kubenswrapper[5014]: I0228 04:36:02.469340 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:02Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:02 crc kubenswrapper[5014]: I0228 04:36:02.490294 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:02Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:02 crc kubenswrapper[5014]: I0228 04:36:02.522865 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0c74249d77ec0ad313792cad1cf7d8c29f8cf913dd65f97fc0f3caca6285e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0c74249d77ec0ad313792cad1cf7d8c29f8cf913dd65f97fc0f3caca6285e2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T04:35:48Z\\\",\\\"message\\\":\\\"ler initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook 
\\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z]\\\\nI0228 04:35:48.215373 7304 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"89fe421e-04e8-4967-ac75-77a0e6f784ef\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-62hnq_openshift-ovn-kubernetes(faa5db1f-df50-492a-9d45-d5065bdc63d2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26
d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:02Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:02 crc kubenswrapper[5014]: I0228 04:36:02.541332 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93c2fbd-22ea-4935-9d13-0cff87209a82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed1d996cd1300f3bb16b61b994ef4f54dc63d091344c9f5ba0352c9c0770e8fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dd97c6cb9aa1bad68bc5df66df1561ea3b9e
38dabe1aa17c100f913e6a3e0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:02Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:02 crc kubenswrapper[5014]: I0228 04:36:02.562981 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719151f8073cf997e9ca1087dcbb47e8f83f0e25a700251737c5a5d4b39bfde4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476
802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:02Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:03 crc kubenswrapper[5014]: I0228 04:36:03.921557 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8xzmq_08c35a73-dfa6-4097-beb4-3a6d4f419559/kube-multus/0.log" Feb 28 04:36:03 crc kubenswrapper[5014]: I0228 04:36:03.921633 5014 generic.go:334] "Generic (PLEG): container finished" podID="08c35a73-dfa6-4097-beb4-3a6d4f419559" containerID="591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c" exitCode=1 Feb 28 04:36:03 crc kubenswrapper[5014]: I0228 04:36:03.921679 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8xzmq" event={"ID":"08c35a73-dfa6-4097-beb4-3a6d4f419559","Type":"ContainerDied","Data":"591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c"} Feb 28 04:36:03 crc kubenswrapper[5014]: I0228 04:36:03.922258 5014 scope.go:117] "RemoveContainer" containerID="591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c" Feb 28 04:36:03 crc kubenswrapper[5014]: I0228 04:36:03.942542 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:03Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:03 crc kubenswrapper[5014]: I0228 04:36:03.967386 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a34b6ee071d2e3fa127cc0c9a1033e04c35415029f7b8cd8babfa9ed00607b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:03Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:03 crc kubenswrapper[5014]: I0228 04:36:03.983089 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:03Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:03 crc kubenswrapper[5014]: I0228 04:36:03.983478 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:36:03 crc kubenswrapper[5014]: I0228 04:36:03.983521 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:36:03 crc kubenswrapper[5014]: I0228 04:36:03.983530 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:36:03 crc kubenswrapper[5014]: I0228 04:36:03.983547 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:36:03 crc kubenswrapper[5014]: I0228 04:36:03.983558 5014 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:36:03Z","lastTransitionTime":"2026-02-28T04:36:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 28 04:36:04 crc kubenswrapper[5014]: E0228 04:36:04.003378 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:03Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:03Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.005784 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209
9482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-02-28T04:36:04Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.010393 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.010455 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.010470 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.010491 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.010503 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:36:04Z","lastTransitionTime":"2026-02-28T04:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.025506 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719151f8073cf997e9ca1087dcbb47e8f83f0e25a700251737c5a5d4b39bfde4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d347
20243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:04Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:04 crc kubenswrapper[5014]: E0228 04:36:04.026927 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:04Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.032049 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.032127 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.032138 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.032157 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.032167 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:36:04Z","lastTransitionTime":"2026-02-28T04:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.042990 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T04:36:03Z\\\",\\\"message\\\":\\\"2026-02-28T04:35:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d66d5acd-f1bc-4d07-b21a-ef57cd9768dc\\\\n2026-02-28T04:35:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d66d5acd-f1bc-4d07-b21a-ef57cd9768dc to /host/opt/cni/bin/\\\\n2026-02-28T04:35:18Z [verbose] multus-daemon started\\\\n2026-02-28T04:35:18Z [verbose] Readiness Indicator file check\\\\n2026-02-28T04:36:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:04Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:04 crc kubenswrapper[5014]: E0228 04:36:04.046203 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:04Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.053621 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.053669 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.053682 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.053700 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.053711 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:36:04Z","lastTransitionTime":"2026-02-28T04:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.061909 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.062089 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:36:04 crc kubenswrapper[5014]: E0228 04:36:04.062110 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:08.062079432 +0000 UTC m=+216.732205392 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:36:04 crc kubenswrapper[5014]: E0228 04:36:04.062205 5014 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 04:36:04 crc kubenswrapper[5014]: E0228 04:36:04.062282 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 04:37:08.062261867 +0000 UTC m=+216.732387787 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.062333 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:36:04 crc kubenswrapper[5014]: E0228 04:36:04.062447 5014 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 04:36:04 crc kubenswrapper[5014]: E0228 04:36:04.062486 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 04:37:08.062477543 +0000 UTC m=+216.732603463 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.075337 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0c74249d77ec0ad313792cad1cf7d8c29f8cf913dd65f97fc0f3caca6285e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0c74249d77ec0ad313792cad1cf7d8c29f8cf913dd65f97fc0f3caca6285e2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T04:35:48Z\\\",\\\"message\\\":\\\"ler initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook 
\\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z]\\\\nI0228 04:35:48.215373 7304 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"89fe421e-04e8-4967-ac75-77a0e6f784ef\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-62hnq_openshift-ovn-kubernetes(faa5db1f-df50-492a-9d45-d5065bdc63d2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26
d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:04Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:04 crc kubenswrapper[5014]: E0228 04:36:04.081251 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:04Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.090599 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.090667 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.090681 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.090708 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.090724 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:36:04Z","lastTransitionTime":"2026-02-28T04:36:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.107251 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93c2fbd-22ea-4935-9d13-0cff87209a82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed1d996cd1300f3bb16b61b994ef4f54dc63d091344c9f5ba0352c9c0770e8fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dd97c6cb9aa1bad68bc5df66df1561ea3b9e38dabe1aa17c100f913e6a3e0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:04Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:04 crc kubenswrapper[5014]: E0228 04:36:04.115456 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:04Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:04 crc kubenswrapper[5014]: E0228 04:36:04.115611 5014 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.125997 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8709dcb48a2e6302ad67942cb60afcb91050a57a792ea2439f62a08dda2f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:04Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.150698 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572758a432519d
3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:04Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.163450 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.163546 5014 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:36:04 crc kubenswrapper[5014]: E0228 04:36:04.163667 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 04:36:04 crc kubenswrapper[5014]: E0228 04:36:04.163686 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 04:36:04 crc kubenswrapper[5014]: E0228 04:36:04.163699 5014 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 04:36:04 crc kubenswrapper[5014]: E0228 04:36:04.163667 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 04:36:04 crc kubenswrapper[5014]: E0228 04:36:04.163777 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 04:36:04 crc kubenswrapper[5014]: E0228 04:36:04.163793 5014 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 04:36:04 crc kubenswrapper[5014]: E0228 04:36:04.163753 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-28 04:37:08.163735882 +0000 UTC m=+216.833861782 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 04:36:04 crc kubenswrapper[5014]: E0228 04:36:04.163860 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-28 04:37:08.163848556 +0000 UTC m=+216.833974466 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.171539 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ad3ec48e4ea5a2629572974dd2869f20e3c380d53e4ea670a60eb00bd3420fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\"
:\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68
7fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"rea
son\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:04Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.172360 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.172487 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.172517 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:36:04 crc kubenswrapper[5014]: E0228 04:36:04.175302 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:36:04 crc kubenswrapper[5014]: E0228 04:36:04.175392 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.172618 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:36:04 crc kubenswrapper[5014]: E0228 04:36:04.175482 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.173403 5014 scope.go:117] "RemoveContainer" containerID="6f0c74249d77ec0ad313792cad1cf7d8c29f8cf913dd65f97fc0f3caca6285e2" Feb 28 04:36:04 crc kubenswrapper[5014]: E0228 04:36:04.175764 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-62hnq_openshift-ovn-kubernetes(faa5db1f-df50-492a-9d45-d5065bdc63d2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" Feb 28 04:36:04 crc kubenswrapper[5014]: E0228 04:36:04.175870 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.192575 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kqnsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e74c76-dc4a-4ab9-a25a-0e925a384492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8528e1edba66f9142fd7b6205f054c15193d6759027c894d8e912f746f642c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\
"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4nws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kqnsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:04Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.217307 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:04Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.233872 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"396380eb-5c77-43a8-9a21-814c3e8888f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714067117f5f948a99fcf525805dbc84659d7486a7d90a54aa54a9b924c7cbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7968cb4f05079288706430331f9a9b96767af7ae0cafa8f46bf17c437a39275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:02Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0228 04:33:34.285189 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0228 04:33:34.288153 1 observer_polling.go:159] Starting file observer\\\\nI0228 04:33:34.331271 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0228 04:33:34.337611 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0228 04:34:02.261141 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0228 04:34:02.261276 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d814b990a8048b944b6ca2b8a1aa5b585368ce3a5d89b7b0993e92a291fa9fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91526622180b27c5558c8064508b1328f9196ac7ffd6f9c5f86bc616d9b6248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2127cf622ac7487c457107170b279eee2bd4abf6ce87378e3b8a17423a25c812\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:04Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.251071 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2078c764-fc6e-49ec-a14b-c3ec7a2d5d4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849ae83a4bc8172d4bc0c361d8f565dcf9d1d71e833d4d875481bbe8b7eca349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-sche
duler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a4e3ad977a015ef15dad7b23433f35d16245c4d2d38b6008a20070f72a20e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0543da82ec1087bd70c14cbb530ed3ee36e372c2d8180bff84cb16d5c4d0d0eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]
}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8affb00e58f77f0887ac9b694df7c6443d27a05af7e2b82ad913287c90e6fd32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8affb00e58f77f0887ac9b694df7c6443d27a05af7e2b82ad913287c90e6fd32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:04Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.266588 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:04Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.283213 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:04Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.296476 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f40dbe122a22253898d35b10f0744955b906a164a96294c156ff7a092ca1a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd405
22302f9192f50bd3b860a550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:04Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.311171 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2258094-df28-401d-aa20-0931bedcb66b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:04Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:04 crc 
kubenswrapper[5014]: I0228 04:36:04.927102 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8xzmq_08c35a73-dfa6-4097-beb4-3a6d4f419559/kube-multus/0.log" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.927168 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8xzmq" event={"ID":"08c35a73-dfa6-4097-beb4-3a6d4f419559","Type":"ContainerStarted","Data":"46118cb340244cc019714cbd9e95c064aa047e8ce68cbb3a667a52b402ae00bb"} Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.946111 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"n
ame\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPat
h\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719151f8073cf997e9ca1087dcbb47e8f83f0e25a700251737c5a5d4b39bfde4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints 
registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:04Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.963658 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46118cb340244cc019714cbd9e95c064aa047e8ce68cbb3a667a52b402ae00bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T04:36:03Z\\\",\\\"message\\\":\\\"2026-02-28T04:35:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d66d5acd-f1bc-4d07-b21a-ef57cd9768dc\\\\n2026-02-28T04:35:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d66d5acd-f1bc-4d07-b21a-ef57cd9768dc to /host/opt/cni/bin/\\\\n2026-02-28T04:35:18Z [verbose] multus-daemon started\\\\n2026-02-28T04:35:18Z [verbose] 
Readiness Indicator file check\\\\n2026-02-28T04:36:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:04Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:04 crc kubenswrapper[5014]: I0228 04:36:04.988119 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0c74249d77ec0ad313792cad1cf7d8c29f8cf913dd65f97fc0f3caca6285e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0c74249d77ec0ad313792cad1cf7d8c29f8cf913dd65f97fc0f3caca6285e2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T04:35:48Z\\\",\\\"message\\\":\\\"ler initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared 
informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z]\\\\nI0228 04:35:48.215373 7304 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"89fe421e-04e8-4967-ac75-77a0e6f784ef\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-62hnq_openshift-ovn-kubernetes(faa5db1f-df50-492a-9d45-d5065bdc63d2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26
d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:04Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:05 crc kubenswrapper[5014]: I0228 04:36:05.004613 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93c2fbd-22ea-4935-9d13-0cff87209a82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed1d996cd1300f3bb16b61b994ef4f54dc63d091344c9f5ba0352c9c0770e8fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dd97c6cb9aa1bad68bc5df66df1561ea3b9e
38dabe1aa17c100f913e6a3e0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:05Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:05 crc kubenswrapper[5014]: I0228 04:36:05.022237 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8709dcb48a2e6302ad67942cb60afcb91050a57a792ea2439f62a08dda2f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T04:36:05Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:05 crc kubenswrapper[5014]: I0228 04:36:05.039330 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572758a432519d3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:05Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:05 crc kubenswrapper[5014]: I0228 04:36:05.053410 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kqnsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e74c76-dc4a-4ab9-a25a-0e925a384492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8528e1edba66f9142fd7b6205f054c15193d6759027c894d8e91
2f746f642c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4nws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kqnsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:05Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:05 crc kubenswrapper[5014]: I0228 04:36:05.076226 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:05Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:05 crc kubenswrapper[5014]: I0228 04:36:05.096848 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"396380eb-5c77-43a8-9a21-814c3e8888f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714067117f5f948a99fcf525805dbc84659d7486a7d90a54aa54a9b924c7cbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7968cb4f05079288706430331f9a9b96767af7ae0cafa8f46bf17c437a39275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:02Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0228 04:33:34.285189 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0228 04:33:34.288153 1 observer_polling.go:159] Starting file observer\\\\nI0228 04:33:34.331271 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0228 04:33:34.337611 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0228 04:34:02.261141 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0228 04:34:02.261276 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d814b990a8048b944b6ca2b8a1aa5b585368ce3a5d89b7b0993e92a291fa9fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91526622180b27c5558c8064508b1328f9196ac7ffd6f9c5f86bc616d9b6248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2127cf622ac7487c457107170b279eee2bd4abf6ce87378e3b8a17423a25c812\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:05Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:05 crc kubenswrapper[5014]: I0228 04:36:05.119034 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2078c764-fc6e-49ec-a14b-c3ec7a2d5d4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849ae83a4bc8172d4bc0c361d8f565dcf9d1d71e833d4d875481bbe8b7eca349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-sche
duler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a4e3ad977a015ef15dad7b23433f35d16245c4d2d38b6008a20070f72a20e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0543da82ec1087bd70c14cbb530ed3ee36e372c2d8180bff84cb16d5c4d0d0eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]
}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8affb00e58f77f0887ac9b694df7c6443d27a05af7e2b82ad913287c90e6fd32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8affb00e58f77f0887ac9b694df7c6443d27a05af7e2b82ad913287c90e6fd32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:05Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:05 crc kubenswrapper[5014]: I0228 04:36:05.133599 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:05Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:05 crc kubenswrapper[5014]: I0228 04:36:05.151224 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:05Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:05 crc kubenswrapper[5014]: I0228 04:36:05.164093 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f40dbe122a22253898d35b10f0744955b906a164a96294c156ff7a092ca1a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd405
22302f9192f50bd3b860a550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:05Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:05 crc kubenswrapper[5014]: I0228 04:36:05.185502 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ad3ec48e4ea5a2629572974dd2869f20e3c380d53e4ea670a60eb00bd3420fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ca
6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:05Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:05 crc kubenswrapper[5014]: I0228 04:36:05.201008 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqllg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2258094-df28-401d-aa20-0931bedcb66b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:05Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:05 crc 
kubenswrapper[5014]: I0228 04:36:05.218564 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:05Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:05 crc kubenswrapper[5014]: I0228 04:36:05.234361 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a34b6ee071d2e3fa127cc0c9a1033e04c35415029f7b8cd8babfa9ed00607b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:05Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:05 crc kubenswrapper[5014]: I0228 04:36:05.255119 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:05Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:05 crc kubenswrapper[5014]: I0228 04:36:05.274478 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:05Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:06 crc kubenswrapper[5014]: I0228 04:36:06.707048 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:36:06 crc kubenswrapper[5014]: I0228 04:36:06.707205 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:36:06 crc kubenswrapper[5014]: E0228 04:36:06.707373 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:36:06 crc kubenswrapper[5014]: I0228 04:36:06.707418 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:36:06 crc kubenswrapper[5014]: E0228 04:36:06.707675 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:36:06 crc kubenswrapper[5014]: E0228 04:36:06.708147 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:36:06 crc kubenswrapper[5014]: I0228 04:36:06.708254 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:36:06 crc kubenswrapper[5014]: E0228 04:36:06.708351 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:36:07 crc kubenswrapper[5014]: E0228 04:36:07.278150 5014 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 28 04:36:08 crc kubenswrapper[5014]: I0228 04:36:08.171543 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:36:08 crc kubenswrapper[5014]: I0228 04:36:08.171604 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:36:08 crc kubenswrapper[5014]: E0228 04:36:08.171987 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:36:08 crc kubenswrapper[5014]: I0228 04:36:08.172049 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:36:08 crc kubenswrapper[5014]: E0228 04:36:08.172273 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:36:08 crc kubenswrapper[5014]: E0228 04:36:08.172351 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:36:08 crc kubenswrapper[5014]: I0228 04:36:08.172872 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:36:08 crc kubenswrapper[5014]: E0228 04:36:08.173058 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:36:10 crc kubenswrapper[5014]: I0228 04:36:10.170708 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:36:10 crc kubenswrapper[5014]: I0228 04:36:10.170720 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:36:10 crc kubenswrapper[5014]: E0228 04:36:10.170916 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:36:10 crc kubenswrapper[5014]: I0228 04:36:10.170765 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:36:10 crc kubenswrapper[5014]: I0228 04:36:10.170708 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:36:10 crc kubenswrapper[5014]: E0228 04:36:10.171048 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:36:10 crc kubenswrapper[5014]: E0228 04:36:10.171141 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:36:10 crc kubenswrapper[5014]: E0228 04:36:10.171280 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:36:12 crc kubenswrapper[5014]: I0228 04:36:12.170680 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:36:12 crc kubenswrapper[5014]: I0228 04:36:12.170709 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:36:12 crc kubenswrapper[5014]: E0228 04:36:12.170791 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:36:12 crc kubenswrapper[5014]: I0228 04:36:12.170826 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:36:12 crc kubenswrapper[5014]: E0228 04:36:12.170882 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:36:12 crc kubenswrapper[5014]: I0228 04:36:12.170935 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:36:12 crc kubenswrapper[5014]: E0228 04:36:12.170957 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:36:12 crc kubenswrapper[5014]: E0228 04:36:12.171002 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:36:12 crc kubenswrapper[5014]: I0228 04:36:12.182573 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8709dcb48a2e6302ad67942cb60afcb91050a57a792ea2439f62a08dda2f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T04:36:12Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:12 crc kubenswrapper[5014]: I0228 04:36:12.191622 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572758a432519d3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:12Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:12 crc kubenswrapper[5014]: I0228 04:36:12.201862 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:12Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:12 crc kubenswrapper[5014]: I0228 04:36:12.219069 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:12Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:12 crc kubenswrapper[5014]: I0228 04:36:12.232375 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f40dbe122a22253898d35b10f0744955b906a164a96294c156ff7a092ca1a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd405
22302f9192f50bd3b860a550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:12Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:12 crc kubenswrapper[5014]: I0228 04:36:12.248543 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ad3ec48e4ea5a2629572974dd2869f20e3c380d53e4ea670a60eb00bd3420fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ca
6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:12Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:12 crc kubenswrapper[5014]: I0228 04:36:12.259715 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kqnsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e74c76-dc4a-4ab9-a25a-0e925a384492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8528e1edba66f9142fd7b6205f054c15193d6759027c894d8e912f746f642c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4nws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kqnsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:12Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:12 crc kubenswrapper[5014]: E0228 04:36:12.278822 5014 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 28 04:36:12 crc kubenswrapper[5014]: I0228 04:36:12.280852 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\
\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee
1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:12Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:12 crc kubenswrapper[5014]: I0228 04:36:12.295683 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"396380eb-5c77-43a8-9a21-814c3e8888f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714067117f5f948a99fcf525805dbc84659d7486a7d90a54aa54a9b924c7cbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7968cb4f05079288706430331f9a9b96767af7ae0cafa8f46bf17c437a39275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:02Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0228 04:33:34.285189 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0228 04:33:34.288153 1 observer_polling.go:159] Starting file observer\\\\nI0228 04:33:34.331271 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0228 04:33:34.337611 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0228 04:34:02.261141 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0228 04:34:02.261276 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d814b990a8048b944b6ca2b8a1aa5b585368ce3a5d89b7b0993e92a291fa9fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91526622180b27c5558c8064508b1328f9196ac7ffd6f9c5f86bc616d9b6248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2127cf622ac7487c457107170b279eee2bd4abf6ce87378e3b8a17423a25c812\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:12Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:12 crc kubenswrapper[5014]: I0228 04:36:12.306370 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2078c764-fc6e-49ec-a14b-c3ec7a2d5d4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849ae83a4bc8172d4bc0c361d8f565dcf9d1d71e833d4d875481bbe8b7eca349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b3
35e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a4e3ad977a015ef15dad7b23433f35d16245c4d2d38b6008a20070f72a20e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0543da82ec1087bd70c14cbb530ed3ee36e372c2d8180bff84cb16d5c4d0d0eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z
\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8affb00e58f77f0887ac9b694df7c6443d27a05af7e2b82ad913287c90e6fd32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8affb00e58f77f0887ac9b694df7c6443d27a05af7e2b82ad913287c90e6fd32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:12Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:12 crc kubenswrapper[5014]: I0228 04:36:12.316704 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2258094-df28-401d-aa20-0931bedcb66b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:12Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:12 crc 
kubenswrapper[5014]: I0228 04:36:12.328796 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:12Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:12 crc kubenswrapper[5014]: I0228 04:36:12.341713 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:12Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:12 crc kubenswrapper[5014]: I0228 04:36:12.356243 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a34b6ee071d2e3fa127cc0c9a1033e04c35415029f7b8cd8babfa9ed00607b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:12Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:12 crc kubenswrapper[5014]: I0228 04:36:12.373528 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:12Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:12 crc kubenswrapper[5014]: I0228 04:36:12.386161 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93c2fbd-22ea-4935-9d13-0cff87209a82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed1d996cd1300f3bb16b61b994ef4f54dc63d091344c9f5ba0352c9c0770e8fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dd97c6cb9aa1bad68bc5df66df1561ea3b9e
38dabe1aa17c100f913e6a3e0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:12Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:12 crc kubenswrapper[5014]: I0228 04:36:12.401768 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719151f8073cf997e9ca1087dcbb47e8f83f0e25a700251737c5a5d4b39bfde4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476
802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:12Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:12 crc kubenswrapper[5014]: I0228 04:36:12.419411 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46118cb340244cc019714cbd9e95c064aa047e8ce68cbb3a667a52b402ae00bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T04:36:03Z\\\",\\\"message\\\":\\\"2026-02-28T04:35:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d66d5acd-f1bc-4d07-b21a-ef57cd9768dc\\\\n2026-02-28T04:35:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d66d5acd-f1bc-4d07-b21a-ef57cd9768dc to /host/opt/cni/bin/\\\\n2026-02-28T04:35:18Z [verbose] multus-daemon started\\\\n2026-02-28T04:35:18Z [verbose] 
Readiness Indicator file check\\\\n2026-02-28T04:36:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:12Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:12 crc kubenswrapper[5014]: I0228 04:36:12.438135 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6f0c74249d77ec0ad313792cad1cf7d8c29f8cf913dd65f97fc0f3caca6285e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0c74249d77ec0ad313792cad1cf7d8c29f8cf913dd65f97fc0f3caca6285e2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T04:35:48Z\\\",\\\"message\\\":\\\"ler initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared 
informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z]\\\\nI0228 04:35:48.215373 7304 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"89fe421e-04e8-4967-ac75-77a0e6f784ef\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-62hnq_openshift-ovn-kubernetes(faa5db1f-df50-492a-9d45-d5065bdc63d2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26
d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:12Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:14 crc kubenswrapper[5014]: I0228 04:36:14.171001 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:36:14 crc kubenswrapper[5014]: E0228 04:36:14.171172 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:36:14 crc kubenswrapper[5014]: I0228 04:36:14.171421 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:36:14 crc kubenswrapper[5014]: I0228 04:36:14.171536 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:36:14 crc kubenswrapper[5014]: E0228 04:36:14.171668 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:36:14 crc kubenswrapper[5014]: I0228 04:36:14.172223 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:36:14 crc kubenswrapper[5014]: E0228 04:36:14.172299 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:36:14 crc kubenswrapper[5014]: E0228 04:36:14.172408 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:36:14 crc kubenswrapper[5014]: I0228 04:36:14.380238 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:36:14 crc kubenswrapper[5014]: I0228 04:36:14.380541 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:36:14 crc kubenswrapper[5014]: I0228 04:36:14.380627 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:36:14 crc kubenswrapper[5014]: I0228 04:36:14.380696 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:36:14 crc kubenswrapper[5014]: I0228 04:36:14.380755 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:36:14Z","lastTransitionTime":"2026-02-28T04:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:36:14 crc kubenswrapper[5014]: E0228 04:36:14.395473 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:14Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:14 crc kubenswrapper[5014]: I0228 04:36:14.399974 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:36:14 crc kubenswrapper[5014]: I0228 04:36:14.399994 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:36:14 crc kubenswrapper[5014]: I0228 04:36:14.400001 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:36:14 crc kubenswrapper[5014]: I0228 04:36:14.400013 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:36:14 crc kubenswrapper[5014]: I0228 04:36:14.400022 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:36:14Z","lastTransitionTime":"2026-02-28T04:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:36:14 crc kubenswrapper[5014]: E0228 04:36:14.417244 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:14Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:14 crc kubenswrapper[5014]: I0228 04:36:14.422100 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:36:14 crc kubenswrapper[5014]: I0228 04:36:14.422137 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:36:14 crc kubenswrapper[5014]: I0228 04:36:14.422145 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:36:14 crc kubenswrapper[5014]: I0228 04:36:14.422168 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:36:14 crc kubenswrapper[5014]: I0228 04:36:14.422177 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:36:14Z","lastTransitionTime":"2026-02-28T04:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:36:14 crc kubenswrapper[5014]: E0228 04:36:14.437739 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:14Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:14 crc kubenswrapper[5014]: I0228 04:36:14.443414 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:36:14 crc kubenswrapper[5014]: I0228 04:36:14.443452 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:36:14 crc kubenswrapper[5014]: I0228 04:36:14.443463 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:36:14 crc kubenswrapper[5014]: I0228 04:36:14.443481 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:36:14 crc kubenswrapper[5014]: I0228 04:36:14.443495 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:36:14Z","lastTransitionTime":"2026-02-28T04:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:36:14 crc kubenswrapper[5014]: E0228 04:36:14.461535 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:14Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:14 crc kubenswrapper[5014]: I0228 04:36:14.466577 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:36:14 crc kubenswrapper[5014]: I0228 04:36:14.466860 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:36:14 crc kubenswrapper[5014]: I0228 04:36:14.467021 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:36:14 crc kubenswrapper[5014]: I0228 04:36:14.467160 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:36:14 crc kubenswrapper[5014]: I0228 04:36:14.467284 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:36:14Z","lastTransitionTime":"2026-02-28T04:36:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:36:14 crc kubenswrapper[5014]: E0228 04:36:14.486703 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:14Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:14 crc kubenswrapper[5014]: E0228 04:36:14.486896 5014 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 28 04:36:16 crc kubenswrapper[5014]: I0228 04:36:16.171266 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:36:16 crc kubenswrapper[5014]: I0228 04:36:16.171266 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:36:16 crc kubenswrapper[5014]: I0228 04:36:16.171463 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:36:16 crc kubenswrapper[5014]: E0228 04:36:16.171548 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:36:16 crc kubenswrapper[5014]: I0228 04:36:16.171606 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:36:16 crc kubenswrapper[5014]: E0228 04:36:16.171759 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:36:16 crc kubenswrapper[5014]: E0228 04:36:16.171995 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:36:16 crc kubenswrapper[5014]: E0228 04:36:16.172064 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:36:17 crc kubenswrapper[5014]: E0228 04:36:17.280536 5014 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 28 04:36:18 crc kubenswrapper[5014]: I0228 04:36:18.171844 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:36:18 crc kubenswrapper[5014]: I0228 04:36:18.171987 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:36:18 crc kubenswrapper[5014]: E0228 04:36:18.172032 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:36:18 crc kubenswrapper[5014]: I0228 04:36:18.171989 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:36:18 crc kubenswrapper[5014]: I0228 04:36:18.172196 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:36:18 crc kubenswrapper[5014]: E0228 04:36:18.172147 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:36:18 crc kubenswrapper[5014]: E0228 04:36:18.172351 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:36:18 crc kubenswrapper[5014]: E0228 04:36:18.172440 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:36:19 crc kubenswrapper[5014]: I0228 04:36:19.171772 5014 scope.go:117] "RemoveContainer" containerID="6f0c74249d77ec0ad313792cad1cf7d8c29f8cf913dd65f97fc0f3caca6285e2" Feb 28 04:36:19 crc kubenswrapper[5014]: I0228 04:36:19.979823 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-62hnq_faa5db1f-df50-492a-9d45-d5065bdc63d2/ovnkube-controller/2.log" Feb 28 04:36:19 crc kubenswrapper[5014]: I0228 04:36:19.982901 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" event={"ID":"faa5db1f-df50-492a-9d45-d5065bdc63d2","Type":"ContainerStarted","Data":"f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328"} Feb 28 04:36:19 crc kubenswrapper[5014]: I0228 04:36:19.983408 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:36:19 crc kubenswrapper[5014]: I0228 04:36:19.995392 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572758a432519d3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:19Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:20 crc kubenswrapper[5014]: I0228 04:36:20.006059 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8709dcb48a2e6302ad67942cb60afcb91050a57a792ea2439f62a08dda2f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-al
erter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:20Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:20 crc kubenswrapper[5014]: I0228 04:36:20.018427 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"396380eb-5c77-43a8-9a21-814c3e8888f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714067117f5f948a99fcf525805dbc84659d7486a7d90a54aa54a9b924c7cbd7\\\",\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7968cb4f05079288706430331f9a9b96767af7ae0cafa8f46bf17c437a39275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:02Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0228 04:33:34.285189 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0228 04:33:34.288153 1 observer_polling.go:159] Starting file observer\\\\nI0228 04:33:34.331271 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0228 04:33:34.337611 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0228 04:34:02.261141 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0228 04:34:02.261276 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d814b990a8048b944b6ca2b8a1aa5b585368ce3a5d89b7b0993e92a291fa9fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91526622180b27c5558c8064508b1328f9196ac7ffd6f9c5f86bc616d9b6248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2127cf622ac7487c457107170b279eee2bd4abf6ce87378e3b8a17423a25c812\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:20Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:20 crc kubenswrapper[5014]: I0228 04:36:20.030412 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2078c764-fc6e-49ec-a14b-c3ec7a2d5d4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849ae83a4bc8172d4bc0c361d8f565dcf9d1d71e833d4d875481bbe8b7eca349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-sche
duler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a4e3ad977a015ef15dad7b23433f35d16245c4d2d38b6008a20070f72a20e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0543da82ec1087bd70c14cbb530ed3ee36e372c2d8180bff84cb16d5c4d0d0eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]
}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8affb00e58f77f0887ac9b694df7c6443d27a05af7e2b82ad913287c90e6fd32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8affb00e58f77f0887ac9b694df7c6443d27a05af7e2b82ad913287c90e6fd32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:20Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:20 crc kubenswrapper[5014]: I0228 04:36:20.043201 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:20Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:20 crc kubenswrapper[5014]: I0228 04:36:20.065920 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:20Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:20 crc kubenswrapper[5014]: I0228 04:36:20.081643 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f40dbe122a22253898d35b10f0744955b906a164a96294c156ff7a092ca1a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd405
22302f9192f50bd3b860a550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:20Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:20 crc kubenswrapper[5014]: I0228 04:36:20.100513 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ad3ec48e4ea5a2629572974dd2869f20e3c380d53e4ea670a60eb00bd3420fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ca
6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:20Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:20 crc kubenswrapper[5014]: I0228 04:36:20.112896 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kqnsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e74c76-dc4a-4ab9-a25a-0e925a384492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8528e1edba66f9142fd7b6205f054c15193d6759027c894d8e912f746f642c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4nws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kqnsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:20Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:20 crc kubenswrapper[5014]: I0228 04:36:20.132296 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:20Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:20 crc kubenswrapper[5014]: I0228 04:36:20.146372 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqllg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2258094-df28-401d-aa20-0931bedcb66b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:20Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:20 crc 
kubenswrapper[5014]: I0228 04:36:20.163420 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a34b6ee071d2e3fa127cc0c9a1033e04c35415029f7b8cd8babfa9ed00607b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:20Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:20 crc kubenswrapper[5014]: I0228 04:36:20.171214 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:36:20 crc kubenswrapper[5014]: I0228 04:36:20.171409 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:36:20 crc kubenswrapper[5014]: E0228 04:36:20.171637 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:36:20 crc kubenswrapper[5014]: I0228 04:36:20.171711 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:36:20 crc kubenswrapper[5014]: I0228 04:36:20.171691 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:36:20 crc kubenswrapper[5014]: E0228 04:36:20.171787 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:36:20 crc kubenswrapper[5014]: E0228 04:36:20.172000 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:36:20 crc kubenswrapper[5014]: E0228 04:36:20.172090 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:36:20 crc kubenswrapper[5014]: I0228 04:36:20.184562 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers 
with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:20Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:20 crc kubenswrapper[5014]: I0228 04:36:20.203026 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:20Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:20 crc kubenswrapper[5014]: I0228 04:36:20.218898 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:20Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:20 crc kubenswrapper[5014]: I0228 04:36:20.238264 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46118cb340244cc019714cbd9e95c064aa047e8ce68cbb3a667a52b402ae00bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T04:36:03Z\\\",\\\"message\\\":\\\"2026-02-28T04:35:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d66d5acd-f1bc-4d07-b21a-ef57cd9768dc\\\\n2026-02-28T04:35:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d66d5acd-f1bc-4d07-b21a-ef57cd9768dc to /host/opt/cni/bin/\\\\n2026-02-28T04:35:18Z [verbose] multus-daemon started\\\\n2026-02-28T04:35:18Z [verbose] 
Readiness Indicator file check\\\\n2026-02-28T04:36:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:20Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:20 crc kubenswrapper[5014]: I0228 04:36:20.257729 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0c74249d77ec0ad313792cad1cf7d8c29f8cf913dd65f97fc0f3caca6285e2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T04:35:48Z\\\",\\\"message\\\":\\\"ler initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared 
informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z]\\\\nI0228 04:35:48.215373 7304 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"89fe421e-04e8-4967-ac75-77a0e6f784ef\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", 
UUI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:36:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\
\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:20Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:20 crc kubenswrapper[5014]: I0228 04:36:20.272306 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93c2fbd-22ea-4935-9d13-0cff87209a82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed1d996cd1300f3bb16b61b994ef4f54dc63d091344c9f5ba0352c9c0770e8fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dd97c6cb9aa1bad68bc5df66df1561ea3b9e
38dabe1aa17c100f913e6a3e0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:20Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:20 crc kubenswrapper[5014]: I0228 04:36:20.288914 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719151f8073cf997e9ca1087dcbb47e8f83f0e25a700251737c5a5d4b39bfde4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476
802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:20Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:20 crc kubenswrapper[5014]: I0228 04:36:20.991281 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-62hnq_faa5db1f-df50-492a-9d45-d5065bdc63d2/ovnkube-controller/3.log" Feb 28 04:36:20 crc kubenswrapper[5014]: I0228 04:36:20.992965 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-62hnq_faa5db1f-df50-492a-9d45-d5065bdc63d2/ovnkube-controller/2.log" Feb 28 04:36:20 crc kubenswrapper[5014]: I0228 04:36:20.997919 5014 generic.go:334] "Generic (PLEG): container finished" podID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerID="f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328" exitCode=1 Feb 28 04:36:20 crc kubenswrapper[5014]: I0228 04:36:20.998006 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" event={"ID":"faa5db1f-df50-492a-9d45-d5065bdc63d2","Type":"ContainerDied","Data":"f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328"} Feb 28 04:36:20 crc kubenswrapper[5014]: I0228 04:36:20.998072 5014 scope.go:117] "RemoveContainer" 
containerID="6f0c74249d77ec0ad313792cad1cf7d8c29f8cf913dd65f97fc0f3caca6285e2" Feb 28 04:36:20 crc kubenswrapper[5014]: I0228 04:36:20.999380 5014 scope.go:117] "RemoveContainer" containerID="f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328" Feb 28 04:36:20 crc kubenswrapper[5014]: E0228 04:36:20.999918 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-62hnq_openshift-ovn-kubernetes(faa5db1f-df50-492a-9d45-d5065bdc63d2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" Feb 28 04:36:21 crc kubenswrapper[5014]: I0228 04:36:21.031763 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"st
ate\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719151f8073cf997e9ca1087dcbb47e8f83f0e25a700251737c5a5d4b39bfde4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for 
mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:21 crc kubenswrapper[5014]: I0228 04:36:21.054096 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46118cb340244cc019714cbd9e95c064aa047e8ce68cbb3a667a52b402ae00bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T04:36:03Z\\\",\\\"message\\\":\\\"2026-02-28T04:35:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d66d5acd-f1bc-4d07-b21a-ef57cd9768dc\\\\n2026-02-28T04:35:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d66d5acd-f1bc-4d07-b21a-ef57cd9768dc to /host/opt/cni/bin/\\\\n2026-02-28T04:35:18Z [verbose] multus-daemon started\\\\n2026-02-28T04:35:18Z [verbose] Readiness Indicator file check\\\\n2026-02-28T04:36:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:21 crc kubenswrapper[5014]: I0228 04:36:21.083048 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6f0c74249d77ec0ad313792cad1cf7d8c29f8cf913dd65f97fc0f3caca6285e2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T04:35:48Z\\\",\\\"message\\\":\\\"ler initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook 
\\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:35:48Z is after 2025-08-24T17:21:41Z]\\\\nI0228 04:35:48.215373 7304 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUID:\\\\\\\"89fe421e-04e8-4967-ac75-77a0e6f784ef\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-marketplace/marketplace-operator-metrics\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-marketplace/marketplace-operator-metrics_TCP_cluster\\\\\\\", UUI\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T04:36:20Z\\\",\\\"message\\\":\\\"e reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: 
UUIDName:}]\\\\nI0228 04:36:20.180943 7572 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0228 04:36:20.181069 7572 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0228 04:36:20.179119 7572 services_controller.go:356] Processing sync for service openshift-marketplace/community-operators for network=default\\\\nF0228 04:36:20.181173 7572 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:36:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e
9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:21 crc kubenswrapper[5014]: I0228 04:36:21.100104 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93c2fbd-22ea-4935-9d13-0cff87209a82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed1d996cd1300f3bb16b61b994ef4f54dc63d091344c9f5ba0352c9c0770e8fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dd97c6cb9aa1bad68bc5df66df1561ea3b9e
38dabe1aa17c100f913e6a3e0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:21 crc kubenswrapper[5014]: I0228 04:36:21.124127 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8709dcb48a2e6302ad67942cb60afcb91050a57a792ea2439f62a08dda2f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T04:36:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:21 crc kubenswrapper[5014]: I0228 04:36:21.139451 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572758a432519d3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:21 crc kubenswrapper[5014]: I0228 04:36:21.156260 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:21 crc kubenswrapper[5014]: I0228 04:36:21.178089 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f40dbe122a22253898d35b10f0744955b906a164a96294c156ff7a092ca1a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd405
22302f9192f50bd3b860a550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:21 crc kubenswrapper[5014]: I0228 04:36:21.204972 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ad3ec48e4ea5a2629572974dd2869f20e3c380d53e4ea670a60eb00bd3420fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ca
6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:21 crc kubenswrapper[5014]: I0228 04:36:21.223158 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kqnsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e74c76-dc4a-4ab9-a25a-0e925a384492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8528e1edba66f9142fd7b6205f054c15193d6759027c894d8e912f746f642c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4nws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kqnsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:21 crc kubenswrapper[5014]: I0228 04:36:21.252852 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:21 crc kubenswrapper[5014]: I0228 04:36:21.275661 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"396380eb-5c77-43a8-9a21-814c3e8888f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714067117f5f948a99fcf525805dbc84659d7486a7d90a54aa54a9b924c7cbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7968cb4f05079288706430331f9a9b96767af7ae0cafa8f46bf17c437a39275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:02Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0228 04:33:34.285189 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0228 04:33:34.288153 1 observer_polling.go:159] Starting file observer\\\\nI0228 04:33:34.331271 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0228 04:33:34.337611 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0228 04:34:02.261141 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0228 04:34:02.261276 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d814b990a8048b944b6ca2b8a1aa5b585368ce3a5d89b7b0993e92a291fa9fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91526622180b27c5558c8064508b1328f9196ac7ffd6f9c5f86bc616d9b6248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2127cf622ac7487c457107170b279eee2bd4abf6ce87378e3b8a17423a25c812\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:21 crc kubenswrapper[5014]: I0228 04:36:21.299377 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2078c764-fc6e-49ec-a14b-c3ec7a2d5d4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849ae83a4bc8172d4bc0c361d8f565dcf9d1d71e833d4d875481bbe8b7eca349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-sche
duler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a4e3ad977a015ef15dad7b23433f35d16245c4d2d38b6008a20070f72a20e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0543da82ec1087bd70c14cbb530ed3ee36e372c2d8180bff84cb16d5c4d0d0eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]
}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8affb00e58f77f0887ac9b694df7c6443d27a05af7e2b82ad913287c90e6fd32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8affb00e58f77f0887ac9b694df7c6443d27a05af7e2b82ad913287c90e6fd32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:21 crc kubenswrapper[5014]: I0228 04:36:21.322207 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:21 crc kubenswrapper[5014]: I0228 04:36:21.339549 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2258094-df28-401d-aa20-0931bedcb66b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:21 crc 
kubenswrapper[5014]: I0228 04:36:21.360602 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:21 crc kubenswrapper[5014]: I0228 04:36:21.385628 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a34b6ee071d2e3fa127cc0c9a1033e04c35415029f7b8cd8babfa9ed00607b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:21 crc kubenswrapper[5014]: I0228 04:36:21.407157 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:21 crc kubenswrapper[5014]: I0228 04:36:21.429927 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:21Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.004107 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-62hnq_faa5db1f-df50-492a-9d45-d5065bdc63d2/ovnkube-controller/3.log" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.009303 5014 scope.go:117] "RemoveContainer" containerID="f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328" Feb 28 04:36:22 crc kubenswrapper[5014]: E0228 04:36:22.009695 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-62hnq_openshift-ovn-kubernetes(faa5db1f-df50-492a-9d45-d5065bdc63d2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.027977 
5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8709dcb48a2e6302ad67942cb60afcb91050a57a792ea2439f62a08dda2f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.044791 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572758a432519d3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T
04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.064527 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ad3ec48e4ea5a2629572974dd2869f20e3c380d53e4ea670a60eb00bd3420fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ca
6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.084911 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kqnsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e74c76-dc4a-4ab9-a25a-0e925a384492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8528e1edba66f9142fd7b6205f054c15193d6759027c894d8e912f746f642c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4nws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kqnsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.108789 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.132484 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"396380eb-5c77-43a8-9a21-814c3e8888f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714067117f5f948a99fcf525805dbc84659d7486a7d90a54aa54a9b924c7cbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7968cb4f05079288706430331f9a9b96767af7ae0cafa8f46bf17c437a39275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:02Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0228 04:33:34.285189 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0228 04:33:34.288153 1 observer_polling.go:159] Starting file observer\\\\nI0228 04:33:34.331271 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0228 04:33:34.337611 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0228 04:34:02.261141 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0228 04:34:02.261276 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d814b990a8048b944b6ca2b8a1aa5b585368ce3a5d89b7b0993e92a291fa9fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91526622180b27c5558c8064508b1328f9196ac7ffd6f9c5f86bc616d9b6248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2127cf622ac7487c457107170b279eee2bd4abf6ce87378e3b8a17423a25c812\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.149914 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2078c764-fc6e-49ec-a14b-c3ec7a2d5d4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849ae83a4bc8172d4bc0c361d8f565dcf9d1d71e833d4d875481bbe8b7eca349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-sche
duler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a4e3ad977a015ef15dad7b23433f35d16245c4d2d38b6008a20070f72a20e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0543da82ec1087bd70c14cbb530ed3ee36e372c2d8180bff84cb16d5c4d0d0eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]
}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8affb00e58f77f0887ac9b694df7c6443d27a05af7e2b82ad913287c90e6fd32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8affb00e58f77f0887ac9b694df7c6443d27a05af7e2b82ad913287c90e6fd32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.169242 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.171409 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.171494 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.171585 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.171886 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:36:22 crc kubenswrapper[5014]: E0228 04:36:22.171875 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:36:22 crc kubenswrapper[5014]: E0228 04:36:22.172094 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:36:22 crc kubenswrapper[5014]: E0228 04:36:22.172241 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:36:22 crc kubenswrapper[5014]: E0228 04:36:22.172403 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.193157 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.206538 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f40dbe122a22253898d35b10f0744955b906a164a96294c156ff7a092ca1a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd405
22302f9192f50bd3b860a550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.217577 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2258094-df28-401d-aa20-0931bedcb66b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc 
kubenswrapper[5014]: I0228 04:36:22.230397 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.247447 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a34b6ee071d2e3fa127cc0c9a1033e04c35415029f7b8cd8babfa9ed00607b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.261179 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc kubenswrapper[5014]: E0228 04:36:22.281039 5014 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.281913 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.305201 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719151f8073cf997e9ca1087dcbb47e8f83f0e25a700251737c5a5d4b39bfde4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476
802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.321407 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46118cb340244cc019714cbd9e95c064aa047e8ce68cbb3a667a52b402ae00bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T04:36:03Z\\\",\\\"message\\\":\\\"2026-02-28T04:35:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d66d5acd-f1bc-4d07-b21a-ef57cd9768dc\\\\n2026-02-28T04:35:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d66d5acd-f1bc-4d07-b21a-ef57cd9768dc to /host/opt/cni/bin/\\\\n2026-02-28T04:35:18Z [verbose] multus-daemon started\\\\n2026-02-28T04:35:18Z [verbose] 
Readiness Indicator file check\\\\n2026-02-28T04:36:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.351832 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T04:36:20Z\\\",\\\"message\\\":\\\"e reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} 
vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0228 04:36:20.180943 7572 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0228 04:36:20.181069 7572 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0228 04:36:20.179119 7572 services_controller.go:356] Processing sync for service openshift-marketplace/community-operators for network=default\\\\nF0228 04:36:20.181173 7572 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:36:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-62hnq_openshift-ovn-kubernetes(faa5db1f-df50-492a-9d45-d5065bdc63d2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26
d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.373632 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93c2fbd-22ea-4935-9d13-0cff87209a82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed1d996cd1300f3bb16b61b994ef4f54dc63d091344c9f5ba0352c9c0770e8fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dd97c6cb9aa1bad68bc5df66df1561ea3b9e
38dabe1aa17c100f913e6a3e0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.395140 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8709dcb48a2e6302ad67942cb60afcb91050a57a792ea2439f62a08dda2f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.413464 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572758a432519d3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.434496 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2078c764-fc6e-49ec-a14b-c3ec7a2d5d4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849ae83a4bc8172d4bc0c361d8f565dcf9d1d71e833d4d875481bbe8b7eca349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a4e3ad977a015ef15dad7b23433f35d16245c4d2d38b6008a20070f72a20e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0543da82ec1087bd70c14cbb530ed3ee36e372c2d8180bff84cb16d5c4d0d0eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\
\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8affb00e58f77f0887ac9b694df7c6443d27a05af7e2b82ad913287c90e6fd32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8affb00e58f77f0887ac9b694df7c6443d27a05af7e2b82ad913287c90e6fd32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.450153 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.465637 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.480210 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f40dbe122a22253898d35b10f0744955b906a164a96294c156ff7a092ca1a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd405
22302f9192f50bd3b860a550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.496345 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ad3ec48e4ea5a2629572974dd2869f20e3c380d53e4ea670a60eb00bd3420fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ca
6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.510550 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kqnsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e74c76-dc4a-4ab9-a25a-0e925a384492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8528e1edba66f9142fd7b6205f054c15193d6759027c894d8e912f746f642c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4nws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kqnsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.531456 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.549284 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"396380eb-5c77-43a8-9a21-814c3e8888f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714067117f5f948a99fcf525805dbc84659d7486a7d90a54aa54a9b924c7cbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7968cb4f05079288706430331f9a9b96767af7ae0cafa8f46bf17c437a39275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:02Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0228 04:33:34.285189 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0228 04:33:34.288153 1 observer_polling.go:159] Starting file observer\\\\nI0228 04:33:34.331271 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0228 04:33:34.337611 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0228 04:34:02.261141 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0228 04:34:02.261276 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d814b990a8048b944b6ca2b8a1aa5b585368ce3a5d89b7b0993e92a291fa9fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91526622180b27c5558c8064508b1328f9196ac7ffd6f9c5f86bc616d9b6248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2127cf622ac7487c457107170b279eee2bd4abf6ce87378e3b8a17423a25c812\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.561602 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqllg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2258094-df28-401d-aa20-0931bedcb66b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc 
kubenswrapper[5014]: I0228 04:36:22.574195 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.588029 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.600017 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.616699 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a34b6ee071d2e3fa127cc0c9a1033e04c35415029f7b8cd8babfa9ed00607b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.640519 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T04:36:20Z\\\",\\\"message\\\":\\\"e reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e 
UUID: UUIDName:}]\\\\nI0228 04:36:20.180943 7572 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0228 04:36:20.181069 7572 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0228 04:36:20.179119 7572 services_controller.go:356] Processing sync for service openshift-marketplace/community-operators for network=default\\\\nF0228 04:36:20.181173 7572 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:36:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-62hnq_openshift-ovn-kubernetes(faa5db1f-df50-492a-9d45-d5065bdc63d2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26
d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.657086 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93c2fbd-22ea-4935-9d13-0cff87209a82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed1d996cd1300f3bb16b61b994ef4f54dc63d091344c9f5ba0352c9c0770e8fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dd97c6cb9aa1bad68bc5df66df1561ea3b9e
38dabe1aa17c100f913e6a3e0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.675223 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719151f8073cf997e9ca1087dcbb47e8f83f0e25a700251737c5a5d4b39bfde4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476
802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:22 crc kubenswrapper[5014]: I0228 04:36:22.695710 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46118cb340244cc019714cbd9e95c064aa047e8ce68cbb3a667a52b402ae00bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T04:36:03Z\\\",\\\"message\\\":\\\"2026-02-28T04:35:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d66d5acd-f1bc-4d07-b21a-ef57cd9768dc\\\\n2026-02-28T04:35:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d66d5acd-f1bc-4d07-b21a-ef57cd9768dc to /host/opt/cni/bin/\\\\n2026-02-28T04:35:18Z [verbose] multus-daemon started\\\\n2026-02-28T04:35:18Z [verbose] 
Readiness Indicator file check\\\\n2026-02-28T04:36:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:22Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:24 crc kubenswrapper[5014]: I0228 04:36:24.171115 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:36:24 crc kubenswrapper[5014]: E0228 04:36:24.171319 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:36:24 crc kubenswrapper[5014]: I0228 04:36:24.171884 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:36:24 crc kubenswrapper[5014]: I0228 04:36:24.171903 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:36:24 crc kubenswrapper[5014]: E0228 04:36:24.171994 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:36:24 crc kubenswrapper[5014]: I0228 04:36:24.171913 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:36:24 crc kubenswrapper[5014]: E0228 04:36:24.172188 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:36:24 crc kubenswrapper[5014]: E0228 04:36:24.173346 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:36:24 crc kubenswrapper[5014]: I0228 04:36:24.496187 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:36:24 crc kubenswrapper[5014]: I0228 04:36:24.496279 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:36:24 crc kubenswrapper[5014]: I0228 04:36:24.496301 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:36:24 crc kubenswrapper[5014]: I0228 04:36:24.496335 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:36:24 crc kubenswrapper[5014]: I0228 04:36:24.496361 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:36:24Z","lastTransitionTime":"2026-02-28T04:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:36:24 crc kubenswrapper[5014]: E0228 04:36:24.515061 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:24Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:24 crc kubenswrapper[5014]: I0228 04:36:24.519956 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:36:24 crc kubenswrapper[5014]: I0228 04:36:24.520003 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:36:24 crc kubenswrapper[5014]: I0228 04:36:24.520016 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:36:24 crc kubenswrapper[5014]: I0228 04:36:24.520035 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:36:24 crc kubenswrapper[5014]: I0228 04:36:24.520046 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:36:24Z","lastTransitionTime":"2026-02-28T04:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:36:24 crc kubenswrapper[5014]: E0228 04:36:24.534750 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:24Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:24 crc kubenswrapper[5014]: I0228 04:36:24.539853 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:36:24 crc kubenswrapper[5014]: I0228 04:36:24.539907 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:36:24 crc kubenswrapper[5014]: I0228 04:36:24.539920 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:36:24 crc kubenswrapper[5014]: I0228 04:36:24.539943 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:36:24 crc kubenswrapper[5014]: I0228 04:36:24.539957 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:36:24Z","lastTransitionTime":"2026-02-28T04:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:36:24 crc kubenswrapper[5014]: E0228 04:36:24.556655 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:24Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:24 crc kubenswrapper[5014]: I0228 04:36:24.560844 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:36:24 crc kubenswrapper[5014]: I0228 04:36:24.560877 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:36:24 crc kubenswrapper[5014]: I0228 04:36:24.560890 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:36:24 crc kubenswrapper[5014]: I0228 04:36:24.560907 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:36:24 crc kubenswrapper[5014]: I0228 04:36:24.560918 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:36:24Z","lastTransitionTime":"2026-02-28T04:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:36:24 crc kubenswrapper[5014]: E0228 04:36:24.574280 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:24Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:24 crc kubenswrapper[5014]: I0228 04:36:24.579051 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:36:24 crc kubenswrapper[5014]: I0228 04:36:24.579089 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:36:24 crc kubenswrapper[5014]: I0228 04:36:24.579100 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:36:24 crc kubenswrapper[5014]: I0228 04:36:24.579117 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:36:24 crc kubenswrapper[5014]: I0228 04:36:24.579126 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:36:24Z","lastTransitionTime":"2026-02-28T04:36:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:36:24 crc kubenswrapper[5014]: E0228 04:36:24.592659 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:24Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:24 crc kubenswrapper[5014]: E0228 04:36:24.592835 5014 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 28 04:36:26 crc kubenswrapper[5014]: I0228 04:36:26.171121 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:36:26 crc kubenswrapper[5014]: I0228 04:36:26.171319 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:36:26 crc kubenswrapper[5014]: E0228 04:36:26.171369 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:36:26 crc kubenswrapper[5014]: I0228 04:36:26.171420 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:36:26 crc kubenswrapper[5014]: I0228 04:36:26.171439 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:36:26 crc kubenswrapper[5014]: E0228 04:36:26.171713 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:36:26 crc kubenswrapper[5014]: E0228 04:36:26.171852 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:36:26 crc kubenswrapper[5014]: E0228 04:36:26.172033 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:36:27 crc kubenswrapper[5014]: E0228 04:36:27.282205 5014 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 28 04:36:28 crc kubenswrapper[5014]: I0228 04:36:28.170944 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:36:28 crc kubenswrapper[5014]: I0228 04:36:28.170944 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:36:28 crc kubenswrapper[5014]: E0228 04:36:28.171177 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:36:28 crc kubenswrapper[5014]: E0228 04:36:28.171301 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:36:28 crc kubenswrapper[5014]: I0228 04:36:28.171713 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:36:28 crc kubenswrapper[5014]: I0228 04:36:28.171875 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:36:28 crc kubenswrapper[5014]: E0228 04:36:28.172033 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:36:28 crc kubenswrapper[5014]: E0228 04:36:28.172219 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:36:30 crc kubenswrapper[5014]: I0228 04:36:30.171517 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:36:30 crc kubenswrapper[5014]: I0228 04:36:30.171668 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:36:30 crc kubenswrapper[5014]: I0228 04:36:30.171568 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:36:30 crc kubenswrapper[5014]: E0228 04:36:30.171939 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:36:30 crc kubenswrapper[5014]: E0228 04:36:30.171706 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:36:30 crc kubenswrapper[5014]: I0228 04:36:30.171532 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:36:30 crc kubenswrapper[5014]: E0228 04:36:30.172091 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:36:30 crc kubenswrapper[5014]: E0228 04:36:30.172122 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:36:32 crc kubenswrapper[5014]: I0228 04:36:32.171324 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:36:32 crc kubenswrapper[5014]: E0228 04:36:32.171491 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:36:32 crc kubenswrapper[5014]: I0228 04:36:32.171721 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:36:32 crc kubenswrapper[5014]: E0228 04:36:32.171797 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:36:32 crc kubenswrapper[5014]: I0228 04:36:32.171963 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:36:32 crc kubenswrapper[5014]: E0228 04:36:32.172454 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:36:32 crc kubenswrapper[5014]: I0228 04:36:32.172600 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:36:32 crc kubenswrapper[5014]: E0228 04:36:32.172783 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:36:32 crc kubenswrapper[5014]: I0228 04:36:32.188865 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2b29aed6-db00-4c95-831f-f3230a6edd2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver
-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://719151f8073cf997e9ca1087dcbb47e8f83f0e25a700251737c5a5d4b39bfde4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:48Z\\\",\\\"message\\\":\\\"file observer\\\\nW0228 04:34:47.750163 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0228 04:34:47.750293 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0228 04:34:47.750896 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-146953533/tls.crt::/tmp/serving-cert-146953533/tls.key\\\\\\\"\\\\nI0228 04:34:47.985566 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0228 04:34:47.992445 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0228 04:34:47.992484 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0228 04:34:47.992528 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" 
limit=400\\\\nI0228 04:34:47.992539 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0228 04:34:48.001724 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI0228 04:34:48.001754 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0228 04:34:48.001853 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001881 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0228 04:34:48.001896 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0228 04:34:48.001909 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0228 04:34:48.001921 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0228 04:34:48.001930 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0228 04:34:48.003421 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:34:47Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:32 crc kubenswrapper[5014]: I0228 04:36:32.203241 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-8xzmq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08c35a73-dfa6-4097-beb4-3a6d4f419559\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46118cb340244cc019714cbd9e95c064aa047e8ce68cbb3a667a52b402ae00bb\\\",\\\"image\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T04:36:03Z\\\",\\\"message\\\":\\\"2026-02-28T04:35:18+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_d66d5acd-f1bc-4d07-b21a-ef57cd9768dc\\\\n2026-02-28T04:35:18+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_d66d5acd-f1bc-4d07-b21a-ef57cd9768dc to /host/opt/cni/bin/\\\\n2026-02-28T04:35:18Z [verbose] multus-daemon started\\\\n2026-02-28T04:35:18Z [verbose] Readiness Indicator file check\\\\n2026-02-28T04:36:03Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:36:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qphnm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-8xzmq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:32 crc kubenswrapper[5014]: I0228 04:36:32.225551 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"faa5db1f-df50-492a-9d45-d5065bdc63d2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-28T04:36:20Z\\\",\\\"message\\\":\\\"e reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e 
UUID: UUIDName:}]\\\\nI0228 04:36:20.180943 7572 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-config-operator/metrics]} name:Service_openshift-config-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.161:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {f32857b5-f652-4313-a0d7-455c3156dd99}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0228 04:36:20.181069 7572 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nI0228 04:36:20.179119 7572 services_controller.go:356] Processing sync for service openshift-marketplace/community-operators for network=default\\\\nF0228 04:36:20.181173 7572 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:36:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-62hnq_openshift-ovn-kubernetes(faa5db1f-df50-492a-9d45-d5065bdc63d2)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d27b5bd60a30e4cd26
d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vp7g9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-62hnq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:32 crc kubenswrapper[5014]: I0228 04:36:32.237345 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a93c2fbd-22ea-4935-9d13-0cff87209a82\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed1d996cd1300f3bb16b61b994ef4f54dc63d091344c9f5ba0352c9c0770e8fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9dd97c6cb9aa1bad68bc5df66df1561ea3b9e
38dabe1aa17c100f913e6a3e0aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5s5bv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-lgr94\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:32 crc kubenswrapper[5014]: I0228 04:36:32.248324 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8709dcb48a2e6302ad67942cb60afcb91050a57a792ea2439f62a08dda2f396\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-28T04:36:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:32 crc kubenswrapper[5014]: I0228 04:36:32.257140 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mpjds" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ee39668-c5e4-4da8-807d-a63d9591161c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://572758a432519d3a889b9cdbc2f8f8b8492a5479564d2f341e736334e2b62689\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-4b79t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mpjds\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:32 crc kubenswrapper[5014]: I0228 04:36:32.270032 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:32 crc kubenswrapper[5014]: I0228 04:36:32.281739 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6aad0009-d904-48f8-8e30-82205907ece1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f40dbe122a22253898d35b10f0744955b906a164a96294c156ff7a092ca1a60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd405
22302f9192f50bd3b860a550\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-khzjr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-cct62\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:32 crc kubenswrapper[5014]: E0228 04:36:32.283102 5014 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 28 04:36:32 crc kubenswrapper[5014]: I0228 04:36:32.295949 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fac15347-d258-4af3-85ab-04ee49634e0a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ad3ec48e4ea5a2629572974dd2869f20e3c380d53e4ea670a60eb00bd3420fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostI
P\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ab5707a917ef3eeee46fa1942d8cd9a1154b90b289e8b9608c08d12a11e006f9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:16Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6eaa7f3c06dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6eaa7f3c0
6dbf52d3457c8350127d0c8151d2ffae1a17b0d5939b76009ddde10\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9e0314483b713173cc24b1939aa8e309167bdcdb8036be9ec4dcfa081ec278d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://030ca6646568252c3d32fe4a4ad9173aa5dd6dd06eb2227dcba7a513dbbefaca\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:19Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3440d2f196806c146c44d4f75c3a9beaf102caf0be091b68f1773d381bd2a042\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663bfb0b7e9e27f801f28938922ed4dcbc5328bb30cee038bfe19b96b6867c34\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:35:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c297n\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-lt2wh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:32 crc kubenswrapper[5014]: I0228 04:36:32.307279 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-kqnsx" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"71e74c76-dc4a-4ab9-a25a-0e925a384492\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2c8528e1edba66f9142fd7b6205f054c15193d6759027c894d8e912f746f642c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e9
6f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p4nws\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-kqnsx\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:32 crc kubenswrapper[5014]: I0228 04:36:32.326070 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74ef50d1-792b-4728-a784-8cc6e275b607\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4f22971faee9d7946ffe4d0386f838b479d86fb19eb0b08f4b7fa16a18642b22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://87830bff74238045f537e6321e02dba9caca86e3f23ea374c93da9dc30e27913\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f69b54958e072ce3cc751cd57f0c218d27087200a93509ef5a731754800adc3a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5852c343df8ac42a38078d4952687882605ec2251a4ef0820e1fca2ffcc3c78f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://205ab92de925a49872757688d4b12d2fc84aa98468d5e2228785ffd43b10543d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://20d9454ec6dec7290ebabc2de20db614e2d8e744499a6715883ebdc9990ffbd3\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://818f2efbcb0257c06818fb42c9140e57cd035db5022d73405f6d2874a3e23005\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4ab09ee96723cc8bdfb7d0e6d414fc9d996038e8cbe77d3c3a8c28621c6f034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:32 crc kubenswrapper[5014]: I0228 04:36:32.338215 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"396380eb-5c77-43a8-9a21-814c3e8888f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://714067117f5f948a99fcf525805dbc84659d7486a7d90a54aa54a9b924c7cbd7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7968cb4f05079288706430331f9a9b96767af7ae0cafa8f46bf17c437a39275c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-28T04:34:02Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0228 04:33:34.285189 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0228 04:33:34.288153 1 observer_polling.go:159] Starting file observer\\\\nI0228 04:33:34.331271 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0228 04:33:34.337611 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0228 04:34:02.261141 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0228 04:34:02.261276 1 cmd.go:179] failed checking apiserver connectivity: Unauthorized\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:34:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d814b990a8048b944b6ca2b8a1aa5b585368ce3a5d89b7b0993e92a291fa9fa9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/
kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://91526622180b27c5558c8064508b1328f9196ac7ffd6f9c5f86bc616d9b6248e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2127cf622ac7487c457107170b279eee2bd4abf6ce87378e3b8a17423a25c812\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\
"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:32 crc kubenswrapper[5014]: I0228 04:36:32.349201 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2078c764-fc6e-49ec-a14b-c3ec7a2d5d4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:34:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://849ae83a4bc8172d4bc0c361d8f565dcf9d1d71e833d4d875481bbe8b7eca349\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-sche
duler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://17a4e3ad977a015ef15dad7b23433f35d16245c4d2d38b6008a20070f72a20e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0543da82ec1087bd70c14cbb530ed3ee36e372c2d8180bff84cb16d5c4d0d0eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]
}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8affb00e58f77f0887ac9b694df7c6443d27a05af7e2b82ad913287c90e6fd32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8affb00e58f77f0887ac9b694df7c6443d27a05af7e2b82ad913287c90e6fd32\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:32 crc kubenswrapper[5014]: I0228 04:36:32.359297 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d28bac1c-d0db-4888-b8e4-2a67c5c52bb1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:33:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://834f3165d83f495af4999d0d2571ee8487e9d418ac6a94d3cc7a5823a3c60cab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:33:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e1654f8336023f297927799ca0af6701413f56ece05dd8faa281781110e54edf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-28T04:33:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-28T04:33:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:33:32Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:32 crc kubenswrapper[5014]: I0228 04:36:32.368544 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-rqllg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a2258094-df28-401d-aa20-0931bedcb66b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6pvqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-28T04:35:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-rqllg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:32 crc 
kubenswrapper[5014]: I0228 04:36:32.380378 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:32 crc kubenswrapper[5014]: I0228 04:36:32.393627 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a34b6ee071d2e3fa127cc0c9a1033e04c35415029f7b8cd8babfa9ed00607b37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:32 crc kubenswrapper[5014]: I0228 04:36:32.405291 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:32 crc kubenswrapper[5014]: I0228 04:36:32.417378 5014 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:00Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-28T04:35:15Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://de9c8761d77fddcc9d3e49c1a29b6eef3c27164dce1b5bbb257b312d25c0e322\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c62e930c95570df3971e647d27b7ef3730b483a4ab2b4bc9a8d646fd675a4e12\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-28T04:35:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:32Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:32 crc kubenswrapper[5014]: I0228 04:36:32.498297 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2258094-df28-401d-aa20-0931bedcb66b-metrics-certs\") pod \"network-metrics-daemon-rqllg\" (UID: \"a2258094-df28-401d-aa20-0931bedcb66b\") " pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:36:32 crc kubenswrapper[5014]: E0228 04:36:32.498431 5014 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 04:36:32 crc kubenswrapper[5014]: E0228 04:36:32.498507 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a2258094-df28-401d-aa20-0931bedcb66b-metrics-certs podName:a2258094-df28-401d-aa20-0931bedcb66b nodeName:}" failed. 
No retries permitted until 2026-02-28 04:37:36.498487151 +0000 UTC m=+245.168613061 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a2258094-df28-401d-aa20-0931bedcb66b-metrics-certs") pod "network-metrics-daemon-rqllg" (UID: "a2258094-df28-401d-aa20-0931bedcb66b") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 28 04:36:34 crc kubenswrapper[5014]: I0228 04:36:34.171423 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:36:34 crc kubenswrapper[5014]: I0228 04:36:34.171465 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:36:34 crc kubenswrapper[5014]: E0228 04:36:34.171561 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:36:34 crc kubenswrapper[5014]: I0228 04:36:34.171598 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:36:34 crc kubenswrapper[5014]: I0228 04:36:34.171678 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:36:34 crc kubenswrapper[5014]: E0228 04:36:34.172007 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:36:34 crc kubenswrapper[5014]: E0228 04:36:34.172127 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:36:34 crc kubenswrapper[5014]: E0228 04:36:34.172171 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:36:34 crc kubenswrapper[5014]: I0228 04:36:34.172331 5014 scope.go:117] "RemoveContainer" containerID="f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328" Feb 28 04:36:34 crc kubenswrapper[5014]: E0228 04:36:34.172475 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-62hnq_openshift-ovn-kubernetes(faa5db1f-df50-492a-9d45-d5065bdc63d2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" Feb 28 04:36:34 crc kubenswrapper[5014]: I0228 04:36:34.611657 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:36:34 crc kubenswrapper[5014]: I0228 04:36:34.611686 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:36:34 crc kubenswrapper[5014]: I0228 04:36:34.611693 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:36:34 crc kubenswrapper[5014]: I0228 04:36:34.611706 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:36:34 crc kubenswrapper[5014]: I0228 04:36:34.611714 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:36:34Z","lastTransitionTime":"2026-02-28T04:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:36:34 crc kubenswrapper[5014]: E0228 04:36:34.623414 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:34Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:34 crc kubenswrapper[5014]: I0228 04:36:34.627106 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:36:34 crc kubenswrapper[5014]: I0228 04:36:34.627250 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:36:34 crc kubenswrapper[5014]: I0228 04:36:34.627348 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:36:34 crc kubenswrapper[5014]: I0228 04:36:34.627444 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:36:34 crc kubenswrapper[5014]: I0228 04:36:34.627527 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:36:34Z","lastTransitionTime":"2026-02-28T04:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:36:34 crc kubenswrapper[5014]: E0228 04:36:34.639742 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:34Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:34 crc kubenswrapper[5014]: I0228 04:36:34.642780 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:36:34 crc kubenswrapper[5014]: I0228 04:36:34.642852 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:36:34 crc kubenswrapper[5014]: I0228 04:36:34.642866 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:36:34 crc kubenswrapper[5014]: I0228 04:36:34.642882 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:36:34 crc kubenswrapper[5014]: I0228 04:36:34.642908 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:36:34Z","lastTransitionTime":"2026-02-28T04:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:36:34 crc kubenswrapper[5014]: E0228 04:36:34.656532 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:34Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:34 crc kubenswrapper[5014]: I0228 04:36:34.660061 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:36:34 crc kubenswrapper[5014]: I0228 04:36:34.660297 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:36:34 crc kubenswrapper[5014]: I0228 04:36:34.660373 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:36:34 crc kubenswrapper[5014]: I0228 04:36:34.660471 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:36:34 crc kubenswrapper[5014]: I0228 04:36:34.660561 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:36:34Z","lastTransitionTime":"2026-02-28T04:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:36:34 crc kubenswrapper[5014]: E0228 04:36:34.672238 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:34Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:34 crc kubenswrapper[5014]: I0228 04:36:34.698289 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:36:34 crc kubenswrapper[5014]: I0228 04:36:34.698360 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:36:34 crc kubenswrapper[5014]: I0228 04:36:34.698377 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:36:34 crc kubenswrapper[5014]: I0228 04:36:34.698404 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:36:34 crc kubenswrapper[5014]: I0228 04:36:34.698424 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:36:34Z","lastTransitionTime":"2026-02-28T04:36:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:36:34 crc kubenswrapper[5014]: E0228 04:36:34.719908 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-28T04:36:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"400c935d-cede-4f46-a04e-2bdcfad90852\\\",\\\"systemUUID\\\":\\\"ed4d7eba-154f-4bc0-9847-938dd12ba271\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-28T04:36:34Z is after 2025-08-24T17:21:41Z" Feb 28 04:36:34 crc kubenswrapper[5014]: E0228 04:36:34.720081 5014 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 28 04:36:36 crc kubenswrapper[5014]: I0228 04:36:36.171567 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:36:36 crc kubenswrapper[5014]: I0228 04:36:36.171573 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:36:36 crc kubenswrapper[5014]: I0228 04:36:36.171645 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:36:36 crc kubenswrapper[5014]: I0228 04:36:36.171723 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:36:36 crc kubenswrapper[5014]: E0228 04:36:36.171859 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:36:36 crc kubenswrapper[5014]: E0228 04:36:36.172203 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:36:36 crc kubenswrapper[5014]: E0228 04:36:36.172302 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:36:36 crc kubenswrapper[5014]: E0228 04:36:36.172434 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:36:37 crc kubenswrapper[5014]: E0228 04:36:37.284190 5014 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 28 04:36:38 crc kubenswrapper[5014]: I0228 04:36:38.171347 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:36:38 crc kubenswrapper[5014]: I0228 04:36:38.171401 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:36:38 crc kubenswrapper[5014]: I0228 04:36:38.171363 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:36:38 crc kubenswrapper[5014]: E0228 04:36:38.171525 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:36:38 crc kubenswrapper[5014]: E0228 04:36:38.171575 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:36:38 crc kubenswrapper[5014]: E0228 04:36:38.171633 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:36:38 crc kubenswrapper[5014]: I0228 04:36:38.171989 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:36:38 crc kubenswrapper[5014]: E0228 04:36:38.172070 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:36:40 crc kubenswrapper[5014]: I0228 04:36:40.171179 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:36:40 crc kubenswrapper[5014]: I0228 04:36:40.171315 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:36:40 crc kubenswrapper[5014]: E0228 04:36:40.171453 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:36:40 crc kubenswrapper[5014]: E0228 04:36:40.171677 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:36:40 crc kubenswrapper[5014]: I0228 04:36:40.171891 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:36:40 crc kubenswrapper[5014]: I0228 04:36:40.171914 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:36:40 crc kubenswrapper[5014]: E0228 04:36:40.172167 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:36:40 crc kubenswrapper[5014]: E0228 04:36:40.172367 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:36:42 crc kubenswrapper[5014]: I0228 04:36:42.171099 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:36:42 crc kubenswrapper[5014]: I0228 04:36:42.171188 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:36:42 crc kubenswrapper[5014]: I0228 04:36:42.171188 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:36:42 crc kubenswrapper[5014]: E0228 04:36:42.171341 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:36:42 crc kubenswrapper[5014]: I0228 04:36:42.171388 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:36:42 crc kubenswrapper[5014]: E0228 04:36:42.171509 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:36:42 crc kubenswrapper[5014]: E0228 04:36:42.171581 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:36:42 crc kubenswrapper[5014]: E0228 04:36:42.171736 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:36:42 crc kubenswrapper[5014]: E0228 04:36:42.285694 5014 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 28 04:36:42 crc kubenswrapper[5014]: I0228 04:36:42.290919 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=99.290893441 podStartE2EDuration="1m39.290893441s" podCreationTimestamp="2026-02-28 04:35:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:36:42.270116341 +0000 UTC m=+190.940242251" watchObservedRunningTime="2026-02-28 04:36:42.290893441 +0000 UTC m=+190.961019351" Feb 28 04:36:42 crc kubenswrapper[5014]: I0228 04:36:42.317420 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-8xzmq" podStartSLOduration=141.317394399 podStartE2EDuration="2m21.317394399s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:36:42.294456849 +0000 UTC m=+190.964582759" watchObservedRunningTime="2026-02-28 
04:36:42.317394399 +0000 UTC m=+190.987520309" Feb 28 04:36:42 crc kubenswrapper[5014]: I0228 04:36:42.355040 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-lgr94" podStartSLOduration=140.355010211 podStartE2EDuration="2m20.355010211s" podCreationTimestamp="2026-02-28 04:34:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:36:42.333994425 +0000 UTC m=+191.004120355" watchObservedRunningTime="2026-02-28 04:36:42.355010211 +0000 UTC m=+191.025136131" Feb 28 04:36:42 crc kubenswrapper[5014]: I0228 04:36:42.369537 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-mpjds" podStartSLOduration=141.369509069 podStartE2EDuration="2m21.369509069s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:36:42.367468374 +0000 UTC m=+191.037594284" watchObservedRunningTime="2026-02-28 04:36:42.369509069 +0000 UTC m=+191.039634969" Feb 28 04:36:42 crc kubenswrapper[5014]: I0228 04:36:42.386883 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-lt2wh" podStartSLOduration=141.386860676 podStartE2EDuration="2m21.386860676s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:36:42.386094024 +0000 UTC m=+191.056219934" watchObservedRunningTime="2026-02-28 04:36:42.386860676 +0000 UTC m=+191.056986586" Feb 28 04:36:42 crc kubenswrapper[5014]: I0228 04:36:42.435427 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-kqnsx" 
podStartSLOduration=141.435399848 podStartE2EDuration="2m21.435399848s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:36:42.403210874 +0000 UTC m=+191.073336784" watchObservedRunningTime="2026-02-28 04:36:42.435399848 +0000 UTC m=+191.105525758" Feb 28 04:36:42 crc kubenswrapper[5014]: I0228 04:36:42.436161 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=94.436154789 podStartE2EDuration="1m34.436154789s" podCreationTimestamp="2026-02-28 04:35:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:36:42.431718558 +0000 UTC m=+191.101844468" watchObservedRunningTime="2026-02-28 04:36:42.436154789 +0000 UTC m=+191.106280699" Feb 28 04:36:42 crc kubenswrapper[5014]: I0228 04:36:42.447753 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=59.447724956 podStartE2EDuration="59.447724956s" podCreationTimestamp="2026-02-28 04:35:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:36:42.447065228 +0000 UTC m=+191.117191148" watchObservedRunningTime="2026-02-28 04:36:42.447724956 +0000 UTC m=+191.117850866" Feb 28 04:36:42 crc kubenswrapper[5014]: I0228 04:36:42.473927 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=70.473901886 podStartE2EDuration="1m10.473901886s" podCreationTimestamp="2026-02-28 04:35:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 
04:36:42.462467572 +0000 UTC m=+191.132593512" watchObservedRunningTime="2026-02-28 04:36:42.473901886 +0000 UTC m=+191.144027796" Feb 28 04:36:42 crc kubenswrapper[5014]: I0228 04:36:42.474841 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=101.474835211 podStartE2EDuration="1m41.474835211s" podCreationTimestamp="2026-02-28 04:35:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:36:42.473729381 +0000 UTC m=+191.143855301" watchObservedRunningTime="2026-02-28 04:36:42.474835211 +0000 UTC m=+191.144961121" Feb 28 04:36:42 crc kubenswrapper[5014]: I0228 04:36:42.504030 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podStartSLOduration=141.504001751 podStartE2EDuration="2m21.504001751s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:36:42.503692233 +0000 UTC m=+191.173818143" watchObservedRunningTime="2026-02-28 04:36:42.504001751 +0000 UTC m=+191.174127661" Feb 28 04:36:44 crc kubenswrapper[5014]: I0228 04:36:44.171069 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:36:44 crc kubenswrapper[5014]: I0228 04:36:44.171166 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:36:44 crc kubenswrapper[5014]: E0228 04:36:44.171323 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:36:44 crc kubenswrapper[5014]: I0228 04:36:44.171363 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:36:44 crc kubenswrapper[5014]: E0228 04:36:44.171508 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:36:44 crc kubenswrapper[5014]: E0228 04:36:44.171612 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:36:44 crc kubenswrapper[5014]: I0228 04:36:44.171662 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:36:44 crc kubenswrapper[5014]: E0228 04:36:44.171737 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:36:44 crc kubenswrapper[5014]: I0228 04:36:44.992616 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 28 04:36:44 crc kubenswrapper[5014]: I0228 04:36:44.992668 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 28 04:36:44 crc kubenswrapper[5014]: I0228 04:36:44.992680 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 28 04:36:44 crc kubenswrapper[5014]: I0228 04:36:44.992700 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 28 04:36:44 crc kubenswrapper[5014]: I0228 04:36:44.992715 5014 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-28T04:36:44Z","lastTransitionTime":"2026-02-28T04:36:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 28 04:36:45 crc kubenswrapper[5014]: I0228 04:36:45.047767 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-zbdgw"] Feb 28 04:36:45 crc kubenswrapper[5014]: I0228 04:36:45.048364 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zbdgw" Feb 28 04:36:45 crc kubenswrapper[5014]: I0228 04:36:45.053189 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 28 04:36:45 crc kubenswrapper[5014]: I0228 04:36:45.053617 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 28 04:36:45 crc kubenswrapper[5014]: I0228 04:36:45.053633 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 28 04:36:45 crc kubenswrapper[5014]: I0228 04:36:45.053987 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 28 04:36:45 crc kubenswrapper[5014]: I0228 04:36:45.080497 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/85bd10f4-9c6f-4f38-b2f8-94a05cb84d22-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-zbdgw\" (UID: \"85bd10f4-9c6f-4f38-b2f8-94a05cb84d22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zbdgw" Feb 28 04:36:45 crc kubenswrapper[5014]: I0228 04:36:45.080605 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/85bd10f4-9c6f-4f38-b2f8-94a05cb84d22-service-ca\") pod \"cluster-version-operator-5c965bbfc6-zbdgw\" (UID: 
\"85bd10f4-9c6f-4f38-b2f8-94a05cb84d22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zbdgw" Feb 28 04:36:45 crc kubenswrapper[5014]: I0228 04:36:45.080720 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/85bd10f4-9c6f-4f38-b2f8-94a05cb84d22-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-zbdgw\" (UID: \"85bd10f4-9c6f-4f38-b2f8-94a05cb84d22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zbdgw" Feb 28 04:36:45 crc kubenswrapper[5014]: I0228 04:36:45.080788 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85bd10f4-9c6f-4f38-b2f8-94a05cb84d22-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-zbdgw\" (UID: \"85bd10f4-9c6f-4f38-b2f8-94a05cb84d22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zbdgw" Feb 28 04:36:45 crc kubenswrapper[5014]: I0228 04:36:45.080891 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85bd10f4-9c6f-4f38-b2f8-94a05cb84d22-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-zbdgw\" (UID: \"85bd10f4-9c6f-4f38-b2f8-94a05cb84d22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zbdgw" Feb 28 04:36:45 crc kubenswrapper[5014]: I0228 04:36:45.182609 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/85bd10f4-9c6f-4f38-b2f8-94a05cb84d22-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-zbdgw\" (UID: \"85bd10f4-9c6f-4f38-b2f8-94a05cb84d22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zbdgw" Feb 28 04:36:45 crc kubenswrapper[5014]: I0228 04:36:45.182687 5014 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/85bd10f4-9c6f-4f38-b2f8-94a05cb84d22-service-ca\") pod \"cluster-version-operator-5c965bbfc6-zbdgw\" (UID: \"85bd10f4-9c6f-4f38-b2f8-94a05cb84d22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zbdgw" Feb 28 04:36:45 crc kubenswrapper[5014]: I0228 04:36:45.182743 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85bd10f4-9c6f-4f38-b2f8-94a05cb84d22-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-zbdgw\" (UID: \"85bd10f4-9c6f-4f38-b2f8-94a05cb84d22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zbdgw" Feb 28 04:36:45 crc kubenswrapper[5014]: I0228 04:36:45.182837 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/85bd10f4-9c6f-4f38-b2f8-94a05cb84d22-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-zbdgw\" (UID: \"85bd10f4-9c6f-4f38-b2f8-94a05cb84d22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zbdgw" Feb 28 04:36:45 crc kubenswrapper[5014]: I0228 04:36:45.182889 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85bd10f4-9c6f-4f38-b2f8-94a05cb84d22-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-zbdgw\" (UID: \"85bd10f4-9c6f-4f38-b2f8-94a05cb84d22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zbdgw" Feb 28 04:36:45 crc kubenswrapper[5014]: I0228 04:36:45.183086 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/85bd10f4-9c6f-4f38-b2f8-94a05cb84d22-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-zbdgw\" (UID: 
\"85bd10f4-9c6f-4f38-b2f8-94a05cb84d22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zbdgw" Feb 28 04:36:45 crc kubenswrapper[5014]: I0228 04:36:45.183987 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/85bd10f4-9c6f-4f38-b2f8-94a05cb84d22-service-ca\") pod \"cluster-version-operator-5c965bbfc6-zbdgw\" (UID: \"85bd10f4-9c6f-4f38-b2f8-94a05cb84d22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zbdgw" Feb 28 04:36:45 crc kubenswrapper[5014]: I0228 04:36:45.184063 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/85bd10f4-9c6f-4f38-b2f8-94a05cb84d22-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-zbdgw\" (UID: \"85bd10f4-9c6f-4f38-b2f8-94a05cb84d22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zbdgw" Feb 28 04:36:45 crc kubenswrapper[5014]: I0228 04:36:45.189962 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/85bd10f4-9c6f-4f38-b2f8-94a05cb84d22-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-zbdgw\" (UID: \"85bd10f4-9c6f-4f38-b2f8-94a05cb84d22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zbdgw" Feb 28 04:36:45 crc kubenswrapper[5014]: I0228 04:36:45.207351 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85bd10f4-9c6f-4f38-b2f8-94a05cb84d22-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-zbdgw\" (UID: \"85bd10f4-9c6f-4f38-b2f8-94a05cb84d22\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zbdgw" Feb 28 04:36:45 crc kubenswrapper[5014]: I0228 04:36:45.372461 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zbdgw" Feb 28 04:36:45 crc kubenswrapper[5014]: I0228 04:36:45.729881 5014 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 28 04:36:45 crc kubenswrapper[5014]: I0228 04:36:45.741839 5014 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 28 04:36:46 crc kubenswrapper[5014]: I0228 04:36:46.093009 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zbdgw" event={"ID":"85bd10f4-9c6f-4f38-b2f8-94a05cb84d22","Type":"ContainerStarted","Data":"990225b8a8ae95af83f45c33e0c3c8011850126d5bbc906af2d6a44c619cea1e"} Feb 28 04:36:46 crc kubenswrapper[5014]: I0228 04:36:46.093141 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zbdgw" event={"ID":"85bd10f4-9c6f-4f38-b2f8-94a05cb84d22","Type":"ContainerStarted","Data":"54d9749f08a4c5a7b456dcf80ded968c184b739e12f89686b229bbbd95320668"} Feb 28 04:36:46 crc kubenswrapper[5014]: I0228 04:36:46.114312 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-zbdgw" podStartSLOduration=145.114288598 podStartE2EDuration="2m25.114288598s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:36:46.113798225 +0000 UTC m=+194.783924135" watchObservedRunningTime="2026-02-28 04:36:46.114288598 +0000 UTC m=+194.784414508" Feb 28 04:36:46 crc kubenswrapper[5014]: I0228 04:36:46.171484 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:36:46 crc kubenswrapper[5014]: I0228 04:36:46.171484 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:36:46 crc kubenswrapper[5014]: E0228 04:36:46.171706 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:36:46 crc kubenswrapper[5014]: I0228 04:36:46.171517 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:36:46 crc kubenswrapper[5014]: I0228 04:36:46.171484 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:36:46 crc kubenswrapper[5014]: E0228 04:36:46.171858 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:36:46 crc kubenswrapper[5014]: E0228 04:36:46.177763 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:36:46 crc kubenswrapper[5014]: E0228 04:36:46.177947 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:36:47 crc kubenswrapper[5014]: E0228 04:36:47.286830 5014 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 28 04:36:48 crc kubenswrapper[5014]: I0228 04:36:48.170961 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:36:48 crc kubenswrapper[5014]: I0228 04:36:48.171146 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:36:48 crc kubenswrapper[5014]: I0228 04:36:48.171146 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:36:48 crc kubenswrapper[5014]: I0228 04:36:48.171374 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:36:48 crc kubenswrapper[5014]: E0228 04:36:48.171372 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:36:48 crc kubenswrapper[5014]: E0228 04:36:48.171549 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:36:48 crc kubenswrapper[5014]: E0228 04:36:48.171741 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:36:48 crc kubenswrapper[5014]: E0228 04:36:48.171944 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:36:49 crc kubenswrapper[5014]: I0228 04:36:49.173096 5014 scope.go:117] "RemoveContainer" containerID="f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328" Feb 28 04:36:49 crc kubenswrapper[5014]: E0228 04:36:49.173709 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-62hnq_openshift-ovn-kubernetes(faa5db1f-df50-492a-9d45-d5065bdc63d2)\"" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" Feb 28 04:36:50 crc kubenswrapper[5014]: I0228 04:36:50.112216 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8xzmq_08c35a73-dfa6-4097-beb4-3a6d4f419559/kube-multus/1.log" Feb 28 04:36:50 crc kubenswrapper[5014]: I0228 04:36:50.112904 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8xzmq_08c35a73-dfa6-4097-beb4-3a6d4f419559/kube-multus/0.log" Feb 28 04:36:50 crc kubenswrapper[5014]: I0228 04:36:50.112953 5014 generic.go:334] "Generic (PLEG): container finished" podID="08c35a73-dfa6-4097-beb4-3a6d4f419559" containerID="46118cb340244cc019714cbd9e95c064aa047e8ce68cbb3a667a52b402ae00bb" exitCode=1 Feb 28 04:36:50 crc kubenswrapper[5014]: I0228 04:36:50.113002 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8xzmq" event={"ID":"08c35a73-dfa6-4097-beb4-3a6d4f419559","Type":"ContainerDied","Data":"46118cb340244cc019714cbd9e95c064aa047e8ce68cbb3a667a52b402ae00bb"} Feb 28 04:36:50 crc kubenswrapper[5014]: I0228 04:36:50.113057 5014 scope.go:117] "RemoveContainer" containerID="591f6d402cfcba7104ead4f7b3e84cd0ed6f9ce6f2c9faf3019cda0ae1235f2c" Feb 28 04:36:50 crc kubenswrapper[5014]: I0228 04:36:50.114924 5014 scope.go:117] 
"RemoveContainer" containerID="46118cb340244cc019714cbd9e95c064aa047e8ce68cbb3a667a52b402ae00bb" Feb 28 04:36:50 crc kubenswrapper[5014]: E0228 04:36:50.115379 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-8xzmq_openshift-multus(08c35a73-dfa6-4097-beb4-3a6d4f419559)\"" pod="openshift-multus/multus-8xzmq" podUID="08c35a73-dfa6-4097-beb4-3a6d4f419559" Feb 28 04:36:50 crc kubenswrapper[5014]: I0228 04:36:50.172077 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:36:50 crc kubenswrapper[5014]: E0228 04:36:50.172238 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:36:50 crc kubenswrapper[5014]: I0228 04:36:50.172093 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:36:50 crc kubenswrapper[5014]: I0228 04:36:50.172293 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:36:50 crc kubenswrapper[5014]: E0228 04:36:50.172331 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:36:50 crc kubenswrapper[5014]: I0228 04:36:50.172093 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:36:50 crc kubenswrapper[5014]: E0228 04:36:50.172438 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:36:50 crc kubenswrapper[5014]: E0228 04:36:50.172541 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:36:51 crc kubenswrapper[5014]: I0228 04:36:51.119568 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8xzmq_08c35a73-dfa6-4097-beb4-3a6d4f419559/kube-multus/1.log" Feb 28 04:36:52 crc kubenswrapper[5014]: I0228 04:36:52.171485 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:36:52 crc kubenswrapper[5014]: I0228 04:36:52.171580 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:36:52 crc kubenswrapper[5014]: I0228 04:36:52.171573 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:36:52 crc kubenswrapper[5014]: I0228 04:36:52.171739 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:36:52 crc kubenswrapper[5014]: E0228 04:36:52.173989 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:36:52 crc kubenswrapper[5014]: E0228 04:36:52.174114 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:36:52 crc kubenswrapper[5014]: E0228 04:36:52.174278 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:36:52 crc kubenswrapper[5014]: E0228 04:36:52.174485 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:36:52 crc kubenswrapper[5014]: E0228 04:36:52.287801 5014 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 28 04:36:54 crc kubenswrapper[5014]: I0228 04:36:54.171610 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:36:54 crc kubenswrapper[5014]: I0228 04:36:54.171771 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:36:54 crc kubenswrapper[5014]: I0228 04:36:54.171745 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:36:54 crc kubenswrapper[5014]: I0228 04:36:54.171733 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:36:54 crc kubenswrapper[5014]: E0228 04:36:54.172204 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:36:54 crc kubenswrapper[5014]: E0228 04:36:54.172604 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:36:54 crc kubenswrapper[5014]: E0228 04:36:54.172449 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:36:54 crc kubenswrapper[5014]: E0228 04:36:54.172093 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:36:56 crc kubenswrapper[5014]: I0228 04:36:56.171139 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:36:56 crc kubenswrapper[5014]: I0228 04:36:56.171237 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:36:56 crc kubenswrapper[5014]: I0228 04:36:56.171312 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:36:56 crc kubenswrapper[5014]: I0228 04:36:56.171147 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:36:56 crc kubenswrapper[5014]: E0228 04:36:56.171712 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:36:56 crc kubenswrapper[5014]: E0228 04:36:56.171852 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:36:56 crc kubenswrapper[5014]: E0228 04:36:56.171388 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:36:56 crc kubenswrapper[5014]: E0228 04:36:56.171590 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:36:57 crc kubenswrapper[5014]: E0228 04:36:57.289393 5014 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 28 04:36:58 crc kubenswrapper[5014]: I0228 04:36:58.171149 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:36:58 crc kubenswrapper[5014]: I0228 04:36:58.171272 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:36:58 crc kubenswrapper[5014]: I0228 04:36:58.171187 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:36:58 crc kubenswrapper[5014]: E0228 04:36:58.171423 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:36:58 crc kubenswrapper[5014]: I0228 04:36:58.171474 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:36:58 crc kubenswrapper[5014]: E0228 04:36:58.171597 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:36:58 crc kubenswrapper[5014]: E0228 04:36:58.171723 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:36:58 crc kubenswrapper[5014]: E0228 04:36:58.171882 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:37:00 crc kubenswrapper[5014]: I0228 04:37:00.171418 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:37:00 crc kubenswrapper[5014]: I0228 04:37:00.171418 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:37:00 crc kubenswrapper[5014]: I0228 04:37:00.171462 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:37:00 crc kubenswrapper[5014]: E0228 04:37:00.171691 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:37:00 crc kubenswrapper[5014]: I0228 04:37:00.171888 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:37:00 crc kubenswrapper[5014]: E0228 04:37:00.171874 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:37:00 crc kubenswrapper[5014]: E0228 04:37:00.171990 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:37:00 crc kubenswrapper[5014]: E0228 04:37:00.172167 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:37:02 crc kubenswrapper[5014]: I0228 04:37:02.171009 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:37:02 crc kubenswrapper[5014]: I0228 04:37:02.171124 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:37:02 crc kubenswrapper[5014]: E0228 04:37:02.173109 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:37:02 crc kubenswrapper[5014]: I0228 04:37:02.173203 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:37:02 crc kubenswrapper[5014]: I0228 04:37:02.173250 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:37:02 crc kubenswrapper[5014]: E0228 04:37:02.173331 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:37:02 crc kubenswrapper[5014]: E0228 04:37:02.173418 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:37:02 crc kubenswrapper[5014]: E0228 04:37:02.174070 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:37:02 crc kubenswrapper[5014]: E0228 04:37:02.290104 5014 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 28 04:37:04 crc kubenswrapper[5014]: I0228 04:37:04.171018 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:37:04 crc kubenswrapper[5014]: I0228 04:37:04.171042 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:37:04 crc kubenswrapper[5014]: I0228 04:37:04.171079 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:37:04 crc kubenswrapper[5014]: E0228 04:37:04.171295 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:37:04 crc kubenswrapper[5014]: I0228 04:37:04.171398 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:37:04 crc kubenswrapper[5014]: E0228 04:37:04.171517 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:37:04 crc kubenswrapper[5014]: E0228 04:37:04.172572 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:37:04 crc kubenswrapper[5014]: E0228 04:37:04.172638 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:37:04 crc kubenswrapper[5014]: I0228 04:37:04.173011 5014 scope.go:117] "RemoveContainer" containerID="f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328" Feb 28 04:37:05 crc kubenswrapper[5014]: I0228 04:37:05.171645 5014 scope.go:117] "RemoveContainer" containerID="46118cb340244cc019714cbd9e95c064aa047e8ce68cbb3a667a52b402ae00bb" Feb 28 04:37:05 crc kubenswrapper[5014]: I0228 04:37:05.191232 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-62hnq_faa5db1f-df50-492a-9d45-d5065bdc63d2/ovnkube-controller/3.log" Feb 28 04:37:05 crc kubenswrapper[5014]: I0228 04:37:05.193905 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" event={"ID":"faa5db1f-df50-492a-9d45-d5065bdc63d2","Type":"ContainerStarted","Data":"b8168a4906c3c8c4ad5a434974d08db3f533dfe3dacced9fb6f2e64954b84a4b"} Feb 28 04:37:05 crc kubenswrapper[5014]: I0228 04:37:05.194684 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:37:05 crc kubenswrapper[5014]: I0228 04:37:05.272598 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" podStartSLOduration=164.272569075 podStartE2EDuration="2m44.272569075s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:05.269831487 +0000 UTC m=+213.939957397" watchObservedRunningTime="2026-02-28 04:37:05.272569075 +0000 UTC m=+213.942694985" Feb 28 04:37:05 crc kubenswrapper[5014]: I0228 04:37:05.324654 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-rqllg"] Feb 28 04:37:05 crc kubenswrapper[5014]: I0228 
04:37:05.324791 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:37:05 crc kubenswrapper[5014]: E0228 04:37:05.324915 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:37:06 crc kubenswrapper[5014]: I0228 04:37:06.170788 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:37:06 crc kubenswrapper[5014]: I0228 04:37:06.170831 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:37:06 crc kubenswrapper[5014]: E0228 04:37:06.171425 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:37:06 crc kubenswrapper[5014]: E0228 04:37:06.171557 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:37:06 crc kubenswrapper[5014]: I0228 04:37:06.170910 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:37:06 crc kubenswrapper[5014]: E0228 04:37:06.172101 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:37:06 crc kubenswrapper[5014]: I0228 04:37:06.200740 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8xzmq_08c35a73-dfa6-4097-beb4-3a6d4f419559/kube-multus/1.log" Feb 28 04:37:06 crc kubenswrapper[5014]: I0228 04:37:06.200877 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8xzmq" event={"ID":"08c35a73-dfa6-4097-beb4-3a6d4f419559","Type":"ContainerStarted","Data":"8ff78696065aad57b08b2613c61ae28962b3f9b9cd220106fba6bb3cf06b46a1"} Feb 28 04:37:07 crc kubenswrapper[5014]: I0228 04:37:07.170732 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:37:07 crc kubenswrapper[5014]: E0228 04:37:07.171257 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:37:07 crc kubenswrapper[5014]: E0228 04:37:07.292152 5014 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 28 04:37:08 crc kubenswrapper[5014]: I0228 04:37:08.078418 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:08 crc kubenswrapper[5014]: E0228 04:37:08.078691 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:39:10.07864679 +0000 UTC m=+338.748772760 (durationBeforeRetry 2m2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:08 crc kubenswrapper[5014]: I0228 04:37:08.078841 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:37:08 crc kubenswrapper[5014]: I0228 04:37:08.078947 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:37:08 crc kubenswrapper[5014]: E0228 04:37:08.079037 5014 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 04:37:08 crc kubenswrapper[5014]: E0228 04:37:08.079078 5014 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 04:37:08 crc kubenswrapper[5014]: E0228 04:37:08.079144 5014 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 04:39:10.079118543 +0000 UTC m=+338.749244483 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 28 04:37:08 crc kubenswrapper[5014]: E0228 04:37:08.079183 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 04:39:10.079166095 +0000 UTC m=+338.749292045 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 28 04:37:08 crc kubenswrapper[5014]: I0228 04:37:08.171143 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:37:08 crc kubenswrapper[5014]: I0228 04:37:08.171228 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:37:08 crc kubenswrapper[5014]: E0228 04:37:08.171392 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:37:08 crc kubenswrapper[5014]: I0228 04:37:08.171163 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:37:08 crc kubenswrapper[5014]: E0228 04:37:08.171605 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:37:08 crc kubenswrapper[5014]: E0228 04:37:08.171735 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:37:08 crc kubenswrapper[5014]: I0228 04:37:08.179736 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:37:08 crc kubenswrapper[5014]: I0228 04:37:08.179868 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:37:08 crc kubenswrapper[5014]: E0228 04:37:08.180021 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 04:37:08 crc kubenswrapper[5014]: E0228 04:37:08.180073 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 04:37:08 crc kubenswrapper[5014]: E0228 04:37:08.180028 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 28 04:37:08 crc kubenswrapper[5014]: E0228 04:37:08.180095 5014 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 04:37:08 crc kubenswrapper[5014]: E0228 04:37:08.180117 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 28 04:37:08 crc kubenswrapper[5014]: E0228 04:37:08.180135 5014 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 04:37:08 crc kubenswrapper[5014]: E0228 04:37:08.180176 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-28 04:39:10.180151522 +0000 UTC m=+338.850277462 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 04:37:08 crc kubenswrapper[5014]: E0228 04:37:08.180204 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-28 04:39:10.180192583 +0000 UTC m=+338.850318523 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 28 04:37:09 crc kubenswrapper[5014]: I0228 04:37:09.171171 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:37:09 crc kubenswrapper[5014]: E0228 04:37:09.171371 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:37:10 crc kubenswrapper[5014]: I0228 04:37:10.171484 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:37:10 crc kubenswrapper[5014]: I0228 04:37:10.171596 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:37:10 crc kubenswrapper[5014]: I0228 04:37:10.171718 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:37:10 crc kubenswrapper[5014]: E0228 04:37:10.171732 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:37:10 crc kubenswrapper[5014]: E0228 04:37:10.171902 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:37:10 crc kubenswrapper[5014]: E0228 04:37:10.172019 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:37:11 crc kubenswrapper[5014]: I0228 04:37:11.171200 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:37:11 crc kubenswrapper[5014]: E0228 04:37:11.171423 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-rqllg" podUID="a2258094-df28-401d-aa20-0931bedcb66b" Feb 28 04:37:12 crc kubenswrapper[5014]: I0228 04:37:12.171075 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:37:12 crc kubenswrapper[5014]: I0228 04:37:12.171114 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:37:12 crc kubenswrapper[5014]: E0228 04:37:12.173325 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:37:12 crc kubenswrapper[5014]: I0228 04:37:12.173419 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:37:12 crc kubenswrapper[5014]: E0228 04:37:12.173702 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:37:12 crc kubenswrapper[5014]: E0228 04:37:12.173778 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:37:13 crc kubenswrapper[5014]: I0228 04:37:13.171605 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:37:13 crc kubenswrapper[5014]: I0228 04:37:13.175402 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 28 04:37:13 crc kubenswrapper[5014]: I0228 04:37:13.177911 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 28 04:37:14 crc kubenswrapper[5014]: I0228 04:37:14.171270 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:37:14 crc kubenswrapper[5014]: I0228 04:37:14.171378 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:37:14 crc kubenswrapper[5014]: I0228 04:37:14.171567 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:37:14 crc kubenswrapper[5014]: I0228 04:37:14.174626 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 28 04:37:14 crc kubenswrapper[5014]: I0228 04:37:14.174856 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 28 04:37:14 crc kubenswrapper[5014]: I0228 04:37:14.176311 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 28 04:37:14 crc kubenswrapper[5014]: I0228 04:37:14.178509 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.614162 5014 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.664867 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-488hv"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.665840 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bpskb"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.666448 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-bpskb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.667285 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.669703 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.670157 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.670840 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.671037 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.671163 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.671835 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.672889 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.673304 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.673390 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.673304 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.673920 5014 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.673945 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.675031 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.675167 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.675297 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.680563 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.686720 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0060d8e2-8ffe-4a64-9109-57cb6f97ec0e-serving-cert\") pod \"apiserver-76f77b778f-488hv\" (UID: \"0060d8e2-8ffe-4a64-9109-57cb6f97ec0e\") " pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.686770 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c33c5a22-b0a7-4e91-9d5b-28b9908fbfd6-images\") pod \"machine-api-operator-5694c8668f-bpskb\" (UID: \"c33c5a22-b0a7-4e91-9d5b-28b9908fbfd6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bpskb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.686799 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/c33c5a22-b0a7-4e91-9d5b-28b9908fbfd6-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bpskb\" (UID: \"c33c5a22-b0a7-4e91-9d5b-28b9908fbfd6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bpskb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.686873 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c33c5a22-b0a7-4e91-9d5b-28b9908fbfd6-config\") pod \"machine-api-operator-5694c8668f-bpskb\" (UID: \"c33c5a22-b0a7-4e91-9d5b-28b9908fbfd6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bpskb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.686896 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0060d8e2-8ffe-4a64-9109-57cb6f97ec0e-audit\") pod \"apiserver-76f77b778f-488hv\" (UID: \"0060d8e2-8ffe-4a64-9109-57cb6f97ec0e\") " pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.686919 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0060d8e2-8ffe-4a64-9109-57cb6f97ec0e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-488hv\" (UID: \"0060d8e2-8ffe-4a64-9109-57cb6f97ec0e\") " pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.686944 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0060d8e2-8ffe-4a64-9109-57cb6f97ec0e-node-pullsecrets\") pod \"apiserver-76f77b778f-488hv\" (UID: \"0060d8e2-8ffe-4a64-9109-57cb6f97ec0e\") " pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.686986 5014 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0060d8e2-8ffe-4a64-9109-57cb6f97ec0e-encryption-config\") pod \"apiserver-76f77b778f-488hv\" (UID: \"0060d8e2-8ffe-4a64-9109-57cb6f97ec0e\") " pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.687074 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0060d8e2-8ffe-4a64-9109-57cb6f97ec0e-image-import-ca\") pod \"apiserver-76f77b778f-488hv\" (UID: \"0060d8e2-8ffe-4a64-9109-57cb6f97ec0e\") " pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.687189 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv785\" (UniqueName: \"kubernetes.io/projected/0060d8e2-8ffe-4a64-9109-57cb6f97ec0e-kube-api-access-fv785\") pod \"apiserver-76f77b778f-488hv\" (UID: \"0060d8e2-8ffe-4a64-9109-57cb6f97ec0e\") " pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.687257 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0060d8e2-8ffe-4a64-9109-57cb6f97ec0e-audit-dir\") pod \"apiserver-76f77b778f-488hv\" (UID: \"0060d8e2-8ffe-4a64-9109-57cb6f97ec0e\") " pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.687299 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0060d8e2-8ffe-4a64-9109-57cb6f97ec0e-etcd-serving-ca\") pod \"apiserver-76f77b778f-488hv\" (UID: \"0060d8e2-8ffe-4a64-9109-57cb6f97ec0e\") " pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 
04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.687318 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njqmz\" (UniqueName: \"kubernetes.io/projected/c33c5a22-b0a7-4e91-9d5b-28b9908fbfd6-kube-api-access-njqmz\") pod \"machine-api-operator-5694c8668f-bpskb\" (UID: \"c33c5a22-b0a7-4e91-9d5b-28b9908fbfd6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bpskb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.687339 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0060d8e2-8ffe-4a64-9109-57cb6f97ec0e-etcd-client\") pod \"apiserver-76f77b778f-488hv\" (UID: \"0060d8e2-8ffe-4a64-9109-57cb6f97ec0e\") " pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.687386 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0060d8e2-8ffe-4a64-9109-57cb6f97ec0e-config\") pod \"apiserver-76f77b778f-488hv\" (UID: \"0060d8e2-8ffe-4a64-9109-57cb6f97ec0e\") " pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.689210 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zlklt"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.689694 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zlklt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.690838 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.691164 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.691646 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.692597 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-wc7xs"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.693090 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wc7xs" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.693706 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kct58"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.694060 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-kct58" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.696846 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.697012 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.701928 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.702198 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fkqnd"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.702601 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.702710 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.702900 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.703072 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.703081 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.724427 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.757175 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.757339 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mt6mh"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.758131 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mt6mh" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.758641 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-w4sdb"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.759359 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w4sdb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.760617 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.760700 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-n8xpb"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.761334 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.761343 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.761410 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.761444 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.761476 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.761532 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-n8xpb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.761568 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.761608 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.761667 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.761782 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.761835 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.761960 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.762131 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.762300 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.762627 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.762830 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s68g5"] Feb 28 04:37:15 
crc kubenswrapper[5014]: I0228 04:37:15.762930 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.763002 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.763167 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.763170 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.763699 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s68g5" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.763745 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.764793 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.764983 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.765980 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.767688 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-cslg4"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.767730 5014 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.768395 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2rth"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.768766 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2rth" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.768853 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-cslg4" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.768924 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.768978 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-cghpw"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.768929 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.769561 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-cghpw" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.774304 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rjr6k"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.775003 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rjr6k" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.776888 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-z87qr"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.777495 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kckfv"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.777886 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537550-hbznb"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.778235 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537550-hbznb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.778262 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-z87qr" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.778325 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kckfv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.786985 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-66gzd"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.787388 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wtnl5"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.787880 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-wtnl5" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.787957 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-66gzd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.788010 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.788110 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.788168 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.788310 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.788334 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-console-oauth-config\") pod \"console-f9d7485db-n8xpb\" (UID: \"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b\") " pod="openshift-console/console-f9d7485db-n8xpb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.788362 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87307a00-6574-43d3-b6d8-5b5ee80ce95a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rjr6k\" (UID: \"87307a00-6574-43d3-b6d8-5b5ee80ce95a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rjr6k" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.788386 5014 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7509910c-9915-4f07-80a6-d0b1eccd9213-profile-collector-cert\") pod \"olm-operator-6b444d44fb-s68g5\" (UID: \"7509910c-9915-4f07-80a6-d0b1eccd9213\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s68g5" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.788404 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsfgw\" (UniqueName: \"kubernetes.io/projected/992977b5-9456-46f3-9534-01f21a293ed1-kube-api-access-lsfgw\") pod \"cluster-image-registry-operator-dc59b4c8b-g2rth\" (UID: \"992977b5-9456-46f3-9534-01f21a293ed1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2rth" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.788425 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-service-ca\") pod \"console-f9d7485db-n8xpb\" (UID: \"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b\") " pod="openshift-console/console-f9d7485db-n8xpb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.788445 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/17f469fa-831e-4e38-8ace-55fc476a337c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-w4sdb\" (UID: \"17f469fa-831e-4e38-8ace-55fc476a337c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w4sdb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.788499 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0060d8e2-8ffe-4a64-9109-57cb6f97ec0e-etcd-serving-ca\") pod \"apiserver-76f77b778f-488hv\" (UID: 
\"0060d8e2-8ffe-4a64-9109-57cb6f97ec0e\") " pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.788653 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.789281 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0060d8e2-8ffe-4a64-9109-57cb6f97ec0e-etcd-serving-ca\") pod \"apiserver-76f77b778f-488hv\" (UID: \"0060d8e2-8ffe-4a64-9109-57cb6f97ec0e\") " pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.789363 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c64w8\" (UniqueName: \"kubernetes.io/projected/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-kube-api-access-c64w8\") pod \"console-f9d7485db-n8xpb\" (UID: \"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b\") " pod="openshift-console/console-f9d7485db-n8xpb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.789451 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njqmz\" (UniqueName: \"kubernetes.io/projected/c33c5a22-b0a7-4e91-9d5b-28b9908fbfd6-kube-api-access-njqmz\") pod \"machine-api-operator-5694c8668f-bpskb\" (UID: \"c33c5a22-b0a7-4e91-9d5b-28b9908fbfd6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bpskb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.789487 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbpjs\" (UniqueName: \"kubernetes.io/projected/e957ffda-f443-4f2b-9a8e-4e2fd41beaad-kube-api-access-pbpjs\") pod \"machine-config-operator-74547568cd-kckfv\" (UID: \"e957ffda-f443-4f2b-9a8e-4e2fd41beaad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kckfv" Feb 
28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.789511 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7509910c-9915-4f07-80a6-d0b1eccd9213-srv-cert\") pod \"olm-operator-6b444d44fb-s68g5\" (UID: \"7509910c-9915-4f07-80a6-d0b1eccd9213\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s68g5" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.789536 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0060d8e2-8ffe-4a64-9109-57cb6f97ec0e-etcd-client\") pod \"apiserver-76f77b778f-488hv\" (UID: \"0060d8e2-8ffe-4a64-9109-57cb6f97ec0e\") " pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.789555 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/91c20ddd-76d6-4e47-a24e-ec090ff039de-secret-volume\") pod \"collect-profiles-29537550-hbznb\" (UID: \"91c20ddd-76d6-4e47-a24e-ec090ff039de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537550-hbznb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.791851 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5xc62"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.792640 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9ab64f63-1297-4556-ae3e-51009cdf2384-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-t8497\" (UID: \"9ab64f63-1297-4556-ae3e-51009cdf2384\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.792680 5014 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/992977b5-9456-46f3-9534-01f21a293ed1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-g2rth\" (UID: \"992977b5-9456-46f3-9534-01f21a293ed1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2rth" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.792700 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/992977b5-9456-46f3-9534-01f21a293ed1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-g2rth\" (UID: \"992977b5-9456-46f3-9534-01f21a293ed1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2rth" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.792735 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-console-serving-cert\") pod \"console-f9d7485db-n8xpb\" (UID: \"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b\") " pod="openshift-console/console-f9d7485db-n8xpb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.792756 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdztj\" (UniqueName: \"kubernetes.io/projected/671a6723-b559-48d1-957e-a56ee7ef7a64-kube-api-access-sdztj\") pod \"authentication-operator-69f744f599-kct58\" (UID: \"671a6723-b559-48d1-957e-a56ee7ef7a64\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kct58" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.792773 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.792841 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17f469fa-831e-4e38-8ace-55fc476a337c-serving-cert\") pod \"openshift-config-operator-7777fb866f-w4sdb\" (UID: \"17f469fa-831e-4e38-8ace-55fc476a337c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w4sdb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.792892 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0060d8e2-8ffe-4a64-9109-57cb6f97ec0e-config\") pod \"apiserver-76f77b778f-488hv\" (UID: \"0060d8e2-8ffe-4a64-9109-57cb6f97ec0e\") " pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.792955 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9ab64f63-1297-4556-ae3e-51009cdf2384-encryption-config\") pod \"apiserver-7bbb656c7d-t8497\" (UID: \"9ab64f63-1297-4556-ae3e-51009cdf2384\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.792983 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b75737b1-5468-47d8-ab73-59b1d3a174a3-config\") pod \"machine-approver-56656f9798-wc7xs\" (UID: \"b75737b1-5468-47d8-ab73-59b1d3a174a3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wc7xs" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.793026 5014 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9ab64f63-1297-4556-ae3e-51009cdf2384-audit-policies\") pod \"apiserver-7bbb656c7d-t8497\" (UID: \"9ab64f63-1297-4556-ae3e-51009cdf2384\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.793057 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ab64f63-1297-4556-ae3e-51009cdf2384-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-t8497\" (UID: \"9ab64f63-1297-4556-ae3e-51009cdf2384\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.793084 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-console-config\") pod \"console-f9d7485db-n8xpb\" (UID: \"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b\") " pod="openshift-console/console-f9d7485db-n8xpb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.793107 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-trusted-ca-bundle\") pod \"console-f9d7485db-n8xpb\" (UID: \"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b\") " pod="openshift-console/console-f9d7485db-n8xpb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.793136 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/671a6723-b559-48d1-957e-a56ee7ef7a64-service-ca-bundle\") pod \"authentication-operator-69f744f599-kct58\" (UID: \"671a6723-b559-48d1-957e-a56ee7ef7a64\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-kct58" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.794249 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.794271 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0060d8e2-8ffe-4a64-9109-57cb6f97ec0e-config\") pod \"apiserver-76f77b778f-488hv\" (UID: \"0060d8e2-8ffe-4a64-9109-57cb6f97ec0e\") " pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.794360 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0060d8e2-8ffe-4a64-9109-57cb6f97ec0e-serving-cert\") pod \"apiserver-76f77b778f-488hv\" (UID: \"0060d8e2-8ffe-4a64-9109-57cb6f97ec0e\") " pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.794401 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9ab64f63-1297-4556-ae3e-51009cdf2384-audit-dir\") pod \"apiserver-7bbb656c7d-t8497\" (UID: \"9ab64f63-1297-4556-ae3e-51009cdf2384\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.794413 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.794501 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.794630 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 28 04:37:15 crc 
kubenswrapper[5014]: I0228 04:37:15.794537 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.794772 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.794423 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91c20ddd-76d6-4e47-a24e-ec090ff039de-config-volume\") pod \"collect-profiles-29537550-hbznb\" (UID: \"91c20ddd-76d6-4e47-a24e-ec090ff039de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537550-hbznb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.795270 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-tk557"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.795900 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-rc7jm"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.796646 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5xc62" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.797559 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-tk557" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.797706 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rxdxq"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.798754 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rxdxq" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.806078 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0060d8e2-8ffe-4a64-9109-57cb6f97ec0e-serving-cert\") pod \"apiserver-76f77b778f-488hv\" (UID: \"0060d8e2-8ffe-4a64-9109-57cb6f97ec0e\") " pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.806371 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rc7jm" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.807797 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0060d8e2-8ffe-4a64-9109-57cb6f97ec0e-etcd-client\") pod \"apiserver-76f77b778f-488hv\" (UID: \"0060d8e2-8ffe-4a64-9109-57cb6f97ec0e\") " pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.808585 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tm7nq"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.820531 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bl64c"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.822285 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.822475 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tm7nq" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.822681 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b75737b1-5468-47d8-ab73-59b1d3a174a3-machine-approver-tls\") pod \"machine-approver-56656f9798-wc7xs\" (UID: \"b75737b1-5468-47d8-ab73-59b1d3a174a3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wc7xs" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.823280 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c33c5a22-b0a7-4e91-9d5b-28b9908fbfd6-images\") pod \"machine-api-operator-5694c8668f-bpskb\" (UID: \"c33c5a22-b0a7-4e91-9d5b-28b9908fbfd6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bpskb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.824603 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c33c5a22-b0a7-4e91-9d5b-28b9908fbfd6-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bpskb\" (UID: \"c33c5a22-b0a7-4e91-9d5b-28b9908fbfd6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bpskb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.825510 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.838420 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.838714 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 28 04:37:15 crc 
kubenswrapper[5014]: I0228 04:37:15.839312 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.839566 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9ab64f63-1297-4556-ae3e-51009cdf2384-etcd-client\") pod \"apiserver-7bbb656c7d-t8497\" (UID: \"9ab64f63-1297-4556-ae3e-51009cdf2384\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.839633 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.839639 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nvp2\" (UniqueName: \"kubernetes.io/projected/17f469fa-831e-4e38-8ace-55fc476a337c-kube-api-access-5nvp2\") pod \"openshift-config-operator-7777fb866f-w4sdb\" (UID: \"17f469fa-831e-4e38-8ace-55fc476a337c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w4sdb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.839677 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c33c5a22-b0a7-4e91-9d5b-28b9908fbfd6-config\") pod \"machine-api-operator-5694c8668f-bpskb\" (UID: \"c33c5a22-b0a7-4e91-9d5b-28b9908fbfd6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bpskb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.839701 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tl2rx\" (UniqueName: \"kubernetes.io/projected/87307a00-6574-43d3-b6d8-5b5ee80ce95a-kube-api-access-tl2rx\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-rjr6k\" (UID: \"87307a00-6574-43d3-b6d8-5b5ee80ce95a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rjr6k" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.839731 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0060d8e2-8ffe-4a64-9109-57cb6f97ec0e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-488hv\" (UID: \"0060d8e2-8ffe-4a64-9109-57cb6f97ec0e\") " pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.839754 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e957ffda-f443-4f2b-9a8e-4e2fd41beaad-images\") pod \"machine-config-operator-74547568cd-kckfv\" (UID: \"e957ffda-f443-4f2b-9a8e-4e2fd41beaad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kckfv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.839777 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.839830 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0060d8e2-8ffe-4a64-9109-57cb6f97ec0e-audit\") pod \"apiserver-76f77b778f-488hv\" (UID: \"0060d8e2-8ffe-4a64-9109-57cb6f97ec0e\") " pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.839858 5014 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0060d8e2-8ffe-4a64-9109-57cb6f97ec0e-node-pullsecrets\") pod \"apiserver-76f77b778f-488hv\" (UID: \"0060d8e2-8ffe-4a64-9109-57cb6f97ec0e\") " pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.839881 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.839906 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcmdq\" (UniqueName: \"kubernetes.io/projected/7509910c-9915-4f07-80a6-d0b1eccd9213-kube-api-access-lcmdq\") pod \"olm-operator-6b444d44fb-s68g5\" (UID: \"7509910c-9915-4f07-80a6-d0b1eccd9213\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s68g5" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.839929 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/671a6723-b559-48d1-957e-a56ee7ef7a64-config\") pod \"authentication-operator-69f744f599-kct58\" (UID: \"671a6723-b559-48d1-957e-a56ee7ef7a64\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kct58" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.839950 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-audit-dir\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: 
\"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.839968 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.840038 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcs5x\" (UniqueName: \"kubernetes.io/projected/9ab64f63-1297-4556-ae3e-51009cdf2384-kube-api-access-zcs5x\") pod \"apiserver-7bbb656c7d-t8497\" (UID: \"9ab64f63-1297-4556-ae3e-51009cdf2384\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.840062 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnb99\" (UniqueName: \"kubernetes.io/projected/98477023-48bc-48a1-a641-dafcc6b08624-kube-api-access-pnb99\") pod \"cluster-samples-operator-665b6dd947-mt6mh\" (UID: \"98477023-48bc-48a1-a641-dafcc6b08624\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mt6mh" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.840107 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-audit-policies\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.840146 5014 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0060d8e2-8ffe-4a64-9109-57cb6f97ec0e-encryption-config\") pod \"apiserver-76f77b778f-488hv\" (UID: \"0060d8e2-8ffe-4a64-9109-57cb6f97ec0e\") " pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.840169 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87307a00-6574-43d3-b6d8-5b5ee80ce95a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rjr6k\" (UID: \"87307a00-6574-43d3-b6d8-5b5ee80ce95a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rjr6k" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.840189 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.840213 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98v2b\" (UniqueName: \"kubernetes.io/projected/8bf7d4c6-1fd5-4fa4-a7a3-bf5af08d7eba-kube-api-access-98v2b\") pod \"control-plane-machine-set-operator-78cbb6b69f-z87qr\" (UID: \"8bf7d4c6-1fd5-4fa4-a7a3-bf5af08d7eba\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-z87qr" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.840232 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/560e68a9-862a-4814-a55d-4ea3e9932ea3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-zlklt\" (UID: \"560e68a9-862a-4814-a55d-4ea3e9932ea3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zlklt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.840259 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e957ffda-f443-4f2b-9a8e-4e2fd41beaad-proxy-tls\") pod \"machine-config-operator-74547568cd-kckfv\" (UID: \"e957ffda-f443-4f2b-9a8e-4e2fd41beaad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kckfv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.840288 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/98477023-48bc-48a1-a641-dafcc6b08624-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-mt6mh\" (UID: \"98477023-48bc-48a1-a641-dafcc6b08624\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mt6mh" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.840326 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/671a6723-b559-48d1-957e-a56ee7ef7a64-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kct58\" (UID: \"671a6723-b559-48d1-957e-a56ee7ef7a64\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kct58" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.840349 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2cqs\" (UniqueName: \"kubernetes.io/projected/9f80824d-7fc7-44e3-982c-2856a99523be-kube-api-access-p2cqs\") pod \"downloads-7954f5f757-cghpw\" (UID: 
\"9f80824d-7fc7-44e3-982c-2856a99523be\") " pod="openshift-console/downloads-7954f5f757-cghpw" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.840387 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-oauth-serving-cert\") pod \"console-f9d7485db-n8xpb\" (UID: \"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b\") " pod="openshift-console/console-f9d7485db-n8xpb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.840406 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.840430 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/992977b5-9456-46f3-9534-01f21a293ed1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-g2rth\" (UID: \"992977b5-9456-46f3-9534-01f21a293ed1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2rth" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.840447 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnh9l\" (UniqueName: \"kubernetes.io/projected/5560e581-fc17-4214-bd6d-2f2332633891-kube-api-access-gnh9l\") pod \"dns-operator-744455d44c-cslg4\" (UID: \"5560e581-fc17-4214-bd6d-2f2332633891\") " pod="openshift-dns-operator/dns-operator-744455d44c-cslg4" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.840465 5014 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.840488 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk76x\" (UniqueName: \"kubernetes.io/projected/560e68a9-862a-4814-a55d-4ea3e9932ea3-kube-api-access-xk76x\") pod \"openshift-apiserver-operator-796bbdcf4f-zlklt\" (UID: \"560e68a9-862a-4814-a55d-4ea3e9932ea3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zlklt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.840508 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv7rd\" (UniqueName: \"kubernetes.io/projected/b75737b1-5468-47d8-ab73-59b1d3a174a3-kube-api-access-tv7rd\") pod \"machine-approver-56656f9798-wc7xs\" (UID: \"b75737b1-5468-47d8-ab73-59b1d3a174a3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wc7xs" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.840527 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnzgw\" (UniqueName: \"kubernetes.io/projected/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-kube-api-access-cnzgw\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.840552 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9ab64f63-1297-4556-ae3e-51009cdf2384-serving-cert\") pod \"apiserver-7bbb656c7d-t8497\" (UID: \"9ab64f63-1297-4556-ae3e-51009cdf2384\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.840568 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b75737b1-5468-47d8-ab73-59b1d3a174a3-auth-proxy-config\") pod \"machine-approver-56656f9798-wc7xs\" (UID: \"b75737b1-5468-47d8-ab73-59b1d3a174a3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wc7xs" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.840585 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e957ffda-f443-4f2b-9a8e-4e2fd41beaad-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kckfv\" (UID: \"e957ffda-f443-4f2b-9a8e-4e2fd41beaad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kckfv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.840603 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.840624 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5560e581-fc17-4214-bd6d-2f2332633891-metrics-tls\") pod \"dns-operator-744455d44c-cslg4\" (UID: \"5560e581-fc17-4214-bd6d-2f2332633891\") " 
pod="openshift-dns-operator/dns-operator-744455d44c-cslg4" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.840645 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8bf7d4c6-1fd5-4fa4-a7a3-bf5af08d7eba-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-z87qr\" (UID: \"8bf7d4c6-1fd5-4fa4-a7a3-bf5af08d7eba\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-z87qr" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.840681 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/560e68a9-862a-4814-a55d-4ea3e9932ea3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-zlklt\" (UID: \"560e68a9-862a-4814-a55d-4ea3e9932ea3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zlklt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.840702 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0060d8e2-8ffe-4a64-9109-57cb6f97ec0e-image-import-ca\") pod \"apiserver-76f77b778f-488hv\" (UID: \"0060d8e2-8ffe-4a64-9109-57cb6f97ec0e\") " pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.840721 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.840737 5014 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.840758 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/671a6723-b559-48d1-957e-a56ee7ef7a64-serving-cert\") pod \"authentication-operator-69f744f599-kct58\" (UID: \"671a6723-b559-48d1-957e-a56ee7ef7a64\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kct58" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.840775 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.840821 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0060d8e2-8ffe-4a64-9109-57cb6f97ec0e-audit-dir\") pod \"apiserver-76f77b778f-488hv\" (UID: \"0060d8e2-8ffe-4a64-9109-57cb6f97ec0e\") " pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.840842 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv785\" (UniqueName: \"kubernetes.io/projected/0060d8e2-8ffe-4a64-9109-57cb6f97ec0e-kube-api-access-fv785\") pod \"apiserver-76f77b778f-488hv\" (UID: 
\"0060d8e2-8ffe-4a64-9109-57cb6f97ec0e\") " pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.840860 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx42m\" (UniqueName: \"kubernetes.io/projected/91c20ddd-76d6-4e47-a24e-ec090ff039de-kube-api-access-hx42m\") pod \"collect-profiles-29537550-hbznb\" (UID: \"91c20ddd-76d6-4e47-a24e-ec090ff039de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537550-hbznb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.841095 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.841310 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ddxr6"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.842077 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-x5xfl"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.842257 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c33c5a22-b0a7-4e91-9d5b-28b9908fbfd6-images\") pod \"machine-api-operator-5694c8668f-bpskb\" (UID: \"c33c5a22-b0a7-4e91-9d5b-28b9908fbfd6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bpskb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.842424 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.842585 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.842688 5014 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.839640 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.843603 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x5xfl" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.843853 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bl64c" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.844082 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ddxr6" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.844280 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c33c5a22-b0a7-4e91-9d5b-28b9908fbfd6-config\") pod \"machine-api-operator-5694c8668f-bpskb\" (UID: \"c33c5a22-b0a7-4e91-9d5b-28b9908fbfd6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bpskb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.844746 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.844913 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0060d8e2-8ffe-4a64-9109-57cb6f97ec0e-node-pullsecrets\") pod \"apiserver-76f77b778f-488hv\" (UID: \"0060d8e2-8ffe-4a64-9109-57cb6f97ec0e\") " pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.845048 5014 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0060d8e2-8ffe-4a64-9109-57cb6f97ec0e-audit-dir\") pod \"apiserver-76f77b778f-488hv\" (UID: \"0060d8e2-8ffe-4a64-9109-57cb6f97ec0e\") " pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.845728 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wlspw"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.845986 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/0060d8e2-8ffe-4a64-9109-57cb6f97ec0e-image-import-ca\") pod \"apiserver-76f77b778f-488hv\" (UID: \"0060d8e2-8ffe-4a64-9109-57cb6f97ec0e\") " pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.846063 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.846126 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.846504 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wlspw" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.847208 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/0060d8e2-8ffe-4a64-9109-57cb6f97ec0e-audit\") pod \"apiserver-76f77b778f-488hv\" (UID: \"0060d8e2-8ffe-4a64-9109-57cb6f97ec0e\") " pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.849415 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.849698 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.849823 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.849936 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.850127 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.850243 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.850328 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.850664 5014 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"default-dockercfg-chnjx" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.850964 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.852238 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sm9r4"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.852928 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.853017 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pnbpp"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.853072 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.853236 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.853367 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.853464 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pnbpp" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.853535 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.853871 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.855700 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c33c5a22-b0a7-4e91-9d5b-28b9908fbfd6-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-bpskb\" (UID: \"c33c5a22-b0a7-4e91-9d5b-28b9908fbfd6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bpskb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.856763 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.857053 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.857201 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.857349 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0060d8e2-8ffe-4a64-9109-57cb6f97ec0e-trusted-ca-bundle\") pod \"apiserver-76f77b778f-488hv\" (UID: \"0060d8e2-8ffe-4a64-9109-57cb6f97ec0e\") " pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.857489 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.857655 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 
04:37:15.857738 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.857842 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.857989 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.858127 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.858146 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.858312 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qmrxt"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.857495 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.858665 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.858874 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.859009 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0060d8e2-8ffe-4a64-9109-57cb6f97ec0e-encryption-config\") pod \"apiserver-76f77b778f-488hv\" 
(UID: \"0060d8e2-8ffe-4a64-9109-57cb6f97ec0e\") " pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.859125 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qmrxt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.864229 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.865031 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.870098 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.870557 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njqmz\" (UniqueName: \"kubernetes.io/projected/c33c5a22-b0a7-4e91-9d5b-28b9908fbfd6-kube-api-access-njqmz\") pod \"machine-api-operator-5694c8668f-bpskb\" (UID: \"c33c5a22-b0a7-4e91-9d5b-28b9908fbfd6\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-bpskb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.878121 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q5llk"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.879451 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q5llk" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.879727 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-h4z55"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.880667 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jplqc"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.881518 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-h4z55" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.881910 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jplqc" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.884537 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wxczw"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.885200 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wxczw" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.888408 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-zmwn2"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.890125 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.891737 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.893861 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vzl28"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.894543 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zmwn2" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.894650 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-vzl28" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.898233 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bpskb"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.899243 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537556-wwqxk"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.900018 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537556-wwqxk" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.901025 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zlklt"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.903980 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-488hv"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.908532 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.908584 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kct58"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.912705 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fkqnd"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.912713 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.927461 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537550-hbznb"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.931169 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.931631 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-z87qr"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.941983 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/9ab64f63-1297-4556-ae3e-51009cdf2384-etcd-client\") pod \"apiserver-7bbb656c7d-t8497\" (UID: \"9ab64f63-1297-4556-ae3e-51009cdf2384\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.942020 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nvp2\" (UniqueName: \"kubernetes.io/projected/17f469fa-831e-4e38-8ace-55fc476a337c-kube-api-access-5nvp2\") pod \"openshift-config-operator-7777fb866f-w4sdb\" (UID: \"17f469fa-831e-4e38-8ace-55fc476a337c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w4sdb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.942046 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkgtv\" (UniqueName: \"kubernetes.io/projected/8c0701a3-3ba4-42cc-b570-bb688909e07d-kube-api-access-hkgtv\") pod \"package-server-manager-789f6589d5-q5llk\" (UID: \"8c0701a3-3ba4-42cc-b570-bb688909e07d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q5llk" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.942068 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tl2rx\" (UniqueName: \"kubernetes.io/projected/87307a00-6574-43d3-b6d8-5b5ee80ce95a-kube-api-access-tl2rx\") pod \"kube-storage-version-migrator-operator-b67b599dd-rjr6k\" (UID: \"87307a00-6574-43d3-b6d8-5b5ee80ce95a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rjr6k" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.942089 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e957ffda-f443-4f2b-9a8e-4e2fd41beaad-images\") pod \"machine-config-operator-74547568cd-kckfv\" (UID: \"e957ffda-f443-4f2b-9a8e-4e2fd41beaad\") " 
pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kckfv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.942107 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.942125 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.942142 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcmdq\" (UniqueName: \"kubernetes.io/projected/7509910c-9915-4f07-80a6-d0b1eccd9213-kube-api-access-lcmdq\") pod \"olm-operator-6b444d44fb-s68g5\" (UID: \"7509910c-9915-4f07-80a6-d0b1eccd9213\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s68g5" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.942163 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcs5x\" (UniqueName: \"kubernetes.io/projected/9ab64f63-1297-4556-ae3e-51009cdf2384-kube-api-access-zcs5x\") pod \"apiserver-7bbb656c7d-t8497\" (UID: \"9ab64f63-1297-4556-ae3e-51009cdf2384\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.942182 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnb99\" (UniqueName: 
\"kubernetes.io/projected/98477023-48bc-48a1-a641-dafcc6b08624-kube-api-access-pnb99\") pod \"cluster-samples-operator-665b6dd947-mt6mh\" (UID: \"98477023-48bc-48a1-a641-dafcc6b08624\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mt6mh" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.942200 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/671a6723-b559-48d1-957e-a56ee7ef7a64-config\") pod \"authentication-operator-69f744f599-kct58\" (UID: \"671a6723-b559-48d1-957e-a56ee7ef7a64\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kct58" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.942258 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-audit-dir\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.942329 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.942360 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3f5d06e-976d-42ce-9693-bc41c2ee9154-config\") pod \"kube-controller-manager-operator-78b949d7b-tm7nq\" (UID: \"c3f5d06e-976d-42ce-9693-bc41c2ee9154\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tm7nq" 
Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.942388 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-audit-policies\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.942410 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3f5d06e-976d-42ce-9693-bc41c2ee9154-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-tm7nq\" (UID: \"c3f5d06e-976d-42ce-9693-bc41c2ee9154\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tm7nq" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.942764 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-audit-dir\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.942953 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87307a00-6574-43d3-b6d8-5b5ee80ce95a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rjr6k\" (UID: \"87307a00-6574-43d3-b6d8-5b5ee80ce95a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rjr6k" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.943002 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.943025 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98v2b\" (UniqueName: \"kubernetes.io/projected/8bf7d4c6-1fd5-4fa4-a7a3-bf5af08d7eba-kube-api-access-98v2b\") pod \"control-plane-machine-set-operator-78cbb6b69f-z87qr\" (UID: \"8bf7d4c6-1fd5-4fa4-a7a3-bf5af08d7eba\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-z87qr" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.943044 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/560e68a9-862a-4814-a55d-4ea3e9932ea3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-zlklt\" (UID: \"560e68a9-862a-4814-a55d-4ea3e9932ea3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zlklt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.943071 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e957ffda-f443-4f2b-9a8e-4e2fd41beaad-proxy-tls\") pod \"machine-config-operator-74547568cd-kckfv\" (UID: \"e957ffda-f443-4f2b-9a8e-4e2fd41beaad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kckfv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.943095 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3f5d06e-976d-42ce-9693-bc41c2ee9154-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-tm7nq\" (UID: \"c3f5d06e-976d-42ce-9693-bc41c2ee9154\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tm7nq" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.943229 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/98477023-48bc-48a1-a641-dafcc6b08624-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-mt6mh\" (UID: \"98477023-48bc-48a1-a641-dafcc6b08624\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mt6mh" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.943265 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/671a6723-b559-48d1-957e-a56ee7ef7a64-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kct58\" (UID: \"671a6723-b559-48d1-957e-a56ee7ef7a64\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kct58" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.943291 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2cqs\" (UniqueName: \"kubernetes.io/projected/9f80824d-7fc7-44e3-982c-2856a99523be-kube-api-access-p2cqs\") pod \"downloads-7954f5f757-cghpw\" (UID: \"9f80824d-7fc7-44e3-982c-2856a99523be\") " pod="openshift-console/downloads-7954f5f757-cghpw" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.943319 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/992977b5-9456-46f3-9534-01f21a293ed1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-g2rth\" (UID: \"992977b5-9456-46f3-9534-01f21a293ed1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2rth" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.943372 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" 
(UniqueName: \"kubernetes.io/configmap/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-oauth-serving-cert\") pod \"console-f9d7485db-n8xpb\" (UID: \"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b\") " pod="openshift-console/console-f9d7485db-n8xpb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.943403 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.943428 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnh9l\" (UniqueName: \"kubernetes.io/projected/5560e581-fc17-4214-bd6d-2f2332633891-kube-api-access-gnh9l\") pod \"dns-operator-744455d44c-cslg4\" (UID: \"5560e581-fc17-4214-bd6d-2f2332633891\") " pod="openshift-dns-operator/dns-operator-744455d44c-cslg4" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.943517 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-audit-policies\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.943531 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 
04:37:15.943594 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ab64f63-1297-4556-ae3e-51009cdf2384-serving-cert\") pod \"apiserver-7bbb656c7d-t8497\" (UID: \"9ab64f63-1297-4556-ae3e-51009cdf2384\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.943616 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk76x\" (UniqueName: \"kubernetes.io/projected/560e68a9-862a-4814-a55d-4ea3e9932ea3-kube-api-access-xk76x\") pod \"openshift-apiserver-operator-796bbdcf4f-zlklt\" (UID: \"560e68a9-862a-4814-a55d-4ea3e9932ea3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zlklt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.943639 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv7rd\" (UniqueName: \"kubernetes.io/projected/b75737b1-5468-47d8-ab73-59b1d3a174a3-kube-api-access-tv7rd\") pod \"machine-approver-56656f9798-wc7xs\" (UID: \"b75737b1-5468-47d8-ab73-59b1d3a174a3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wc7xs" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.943659 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnzgw\" (UniqueName: \"kubernetes.io/projected/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-kube-api-access-cnzgw\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.943684 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5560e581-fc17-4214-bd6d-2f2332633891-metrics-tls\") pod \"dns-operator-744455d44c-cslg4\" (UID: \"5560e581-fc17-4214-bd6d-2f2332633891\") 
" pod="openshift-dns-operator/dns-operator-744455d44c-cslg4" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.943713 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b75737b1-5468-47d8-ab73-59b1d3a174a3-auth-proxy-config\") pod \"machine-approver-56656f9798-wc7xs\" (UID: \"b75737b1-5468-47d8-ab73-59b1d3a174a3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wc7xs" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.943760 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e957ffda-f443-4f2b-9a8e-4e2fd41beaad-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kckfv\" (UID: \"e957ffda-f443-4f2b-9a8e-4e2fd41beaad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kckfv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.943790 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.943840 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8bf7d4c6-1fd5-4fa4-a7a3-bf5af08d7eba-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-z87qr\" (UID: \"8bf7d4c6-1fd5-4fa4-a7a3-bf5af08d7eba\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-z87qr" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.943887 5014 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/560e68a9-862a-4814-a55d-4ea3e9932ea3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-zlklt\" (UID: \"560e68a9-862a-4814-a55d-4ea3e9932ea3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zlklt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.943985 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7c8bd670-9de4-422c-9ff3-12f776fbc47f-proxy-tls\") pod \"machine-config-controller-84d6567774-x5xfl\" (UID: \"7c8bd670-9de4-422c-9ff3-12f776fbc47f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x5xfl" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.944322 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.944368 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/671a6723-b559-48d1-957e-a56ee7ef7a64-config\") pod \"authentication-operator-69f744f599-kct58\" (UID: \"671a6723-b559-48d1-957e-a56ee7ef7a64\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kct58" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.944345 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: 
\"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.944413 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7c8bd670-9de4-422c-9ff3-12f776fbc47f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-x5xfl\" (UID: \"7c8bd670-9de4-422c-9ff3-12f776fbc47f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x5xfl" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.944438 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/671a6723-b559-48d1-957e-a56ee7ef7a64-serving-cert\") pod \"authentication-operator-69f744f599-kct58\" (UID: \"671a6723-b559-48d1-957e-a56ee7ef7a64\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kct58" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.944472 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.944515 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx42m\" (UniqueName: \"kubernetes.io/projected/91c20ddd-76d6-4e47-a24e-ec090ff039de-kube-api-access-hx42m\") pod \"collect-profiles-29537550-hbznb\" (UID: \"91c20ddd-76d6-4e47-a24e-ec090ff039de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537550-hbznb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.944539 5014 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87307a00-6574-43d3-b6d8-5b5ee80ce95a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rjr6k\" (UID: \"87307a00-6574-43d3-b6d8-5b5ee80ce95a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rjr6k" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.944559 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-console-oauth-config\") pod \"console-f9d7485db-n8xpb\" (UID: \"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b\") " pod="openshift-console/console-f9d7485db-n8xpb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.944582 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8c0701a3-3ba4-42cc-b570-bb688909e07d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-q5llk\" (UID: \"8c0701a3-3ba4-42cc-b570-bb688909e07d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q5llk" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.944600 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsfgw\" (UniqueName: \"kubernetes.io/projected/992977b5-9456-46f3-9534-01f21a293ed1-kube-api-access-lsfgw\") pod \"cluster-image-registry-operator-dc59b4c8b-g2rth\" (UID: \"992977b5-9456-46f3-9534-01f21a293ed1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2rth" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.944617 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-service-ca\") pod \"console-f9d7485db-n8xpb\" 
(UID: \"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b\") " pod="openshift-console/console-f9d7485db-n8xpb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.944635 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/17f469fa-831e-4e38-8ace-55fc476a337c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-w4sdb\" (UID: \"17f469fa-831e-4e38-8ace-55fc476a337c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w4sdb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.944653 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7509910c-9915-4f07-80a6-d0b1eccd9213-profile-collector-cert\") pod \"olm-operator-6b444d44fb-s68g5\" (UID: \"7509910c-9915-4f07-80a6-d0b1eccd9213\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s68g5" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.944675 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c64w8\" (UniqueName: \"kubernetes.io/projected/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-kube-api-access-c64w8\") pod \"console-f9d7485db-n8xpb\" (UID: \"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b\") " pod="openshift-console/console-f9d7485db-n8xpb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.944723 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbpjs\" (UniqueName: \"kubernetes.io/projected/e957ffda-f443-4f2b-9a8e-4e2fd41beaad-kube-api-access-pbpjs\") pod \"machine-config-operator-74547568cd-kckfv\" (UID: \"e957ffda-f443-4f2b-9a8e-4e2fd41beaad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kckfv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.944746 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/7509910c-9915-4f07-80a6-d0b1eccd9213-srv-cert\") pod \"olm-operator-6b444d44fb-s68g5\" (UID: \"7509910c-9915-4f07-80a6-d0b1eccd9213\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s68g5" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.944765 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/91c20ddd-76d6-4e47-a24e-ec090ff039de-secret-volume\") pod \"collect-profiles-29537550-hbznb\" (UID: \"91c20ddd-76d6-4e47-a24e-ec090ff039de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537550-hbznb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.944793 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9ab64f63-1297-4556-ae3e-51009cdf2384-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-t8497\" (UID: \"9ab64f63-1297-4556-ae3e-51009cdf2384\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.944828 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-589ks\" (UniqueName: \"kubernetes.io/projected/7c8bd670-9de4-422c-9ff3-12f776fbc47f-kube-api-access-589ks\") pod \"machine-config-controller-84d6567774-x5xfl\" (UID: \"7c8bd670-9de4-422c-9ff3-12f776fbc47f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x5xfl" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.944851 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/992977b5-9456-46f3-9534-01f21a293ed1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-g2rth\" (UID: \"992977b5-9456-46f3-9534-01f21a293ed1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2rth" Feb 28 
04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.944867 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/992977b5-9456-46f3-9534-01f21a293ed1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-g2rth\" (UID: \"992977b5-9456-46f3-9534-01f21a293ed1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2rth" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.944886 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-console-serving-cert\") pod \"console-f9d7485db-n8xpb\" (UID: \"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b\") " pod="openshift-console/console-f9d7485db-n8xpb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.944903 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdztj\" (UniqueName: \"kubernetes.io/projected/671a6723-b559-48d1-957e-a56ee7ef7a64-kube-api-access-sdztj\") pod \"authentication-operator-69f744f599-kct58\" (UID: \"671a6723-b559-48d1-957e-a56ee7ef7a64\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kct58" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.944920 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.944938 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17f469fa-831e-4e38-8ace-55fc476a337c-serving-cert\") pod 
\"openshift-config-operator-7777fb866f-w4sdb\" (UID: \"17f469fa-831e-4e38-8ace-55fc476a337c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w4sdb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.944964 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9ab64f63-1297-4556-ae3e-51009cdf2384-encryption-config\") pod \"apiserver-7bbb656c7d-t8497\" (UID: \"9ab64f63-1297-4556-ae3e-51009cdf2384\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.944984 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b75737b1-5468-47d8-ab73-59b1d3a174a3-config\") pod \"machine-approver-56656f9798-wc7xs\" (UID: \"b75737b1-5468-47d8-ab73-59b1d3a174a3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wc7xs" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.945004 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9ab64f63-1297-4556-ae3e-51009cdf2384-audit-policies\") pod \"apiserver-7bbb656c7d-t8497\" (UID: \"9ab64f63-1297-4556-ae3e-51009cdf2384\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.945037 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2rth"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.945067 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ab64f63-1297-4556-ae3e-51009cdf2384-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-t8497\" (UID: \"9ab64f63-1297-4556-ae3e-51009cdf2384\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497" Feb 28 
04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.945090 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-console-config\") pod \"console-f9d7485db-n8xpb\" (UID: \"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b\") " pod="openshift-console/console-f9d7485db-n8xpb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.945106 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-trusted-ca-bundle\") pod \"console-f9d7485db-n8xpb\" (UID: \"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b\") " pod="openshift-console/console-f9d7485db-n8xpb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.945122 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/671a6723-b559-48d1-957e-a56ee7ef7a64-service-ca-bundle\") pod \"authentication-operator-69f744f599-kct58\" (UID: \"671a6723-b559-48d1-957e-a56ee7ef7a64\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kct58" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.945151 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9ab64f63-1297-4556-ae3e-51009cdf2384-audit-dir\") pod \"apiserver-7bbb656c7d-t8497\" (UID: \"9ab64f63-1297-4556-ae3e-51009cdf2384\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.945174 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91c20ddd-76d6-4e47-a24e-ec090ff039de-config-volume\") pod \"collect-profiles-29537550-hbznb\" (UID: \"91c20ddd-76d6-4e47-a24e-ec090ff039de\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29537550-hbznb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.945193 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b75737b1-5468-47d8-ab73-59b1d3a174a3-machine-approver-tls\") pod \"machine-approver-56656f9798-wc7xs\" (UID: \"b75737b1-5468-47d8-ab73-59b1d3a174a3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wc7xs" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.945888 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.946078 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-cslg4"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.946308 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.946935 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/671a6723-b559-48d1-957e-a56ee7ef7a64-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kct58\" (UID: \"671a6723-b559-48d1-957e-a56ee7ef7a64\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kct58" Feb 28 
04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.947214 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-oauth-serving-cert\") pod \"console-f9d7485db-n8xpb\" (UID: \"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b\") " pod="openshift-console/console-f9d7485db-n8xpb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.947309 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-dpwrd"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.947643 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/992977b5-9456-46f3-9534-01f21a293ed1-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-g2rth\" (UID: \"992977b5-9456-46f3-9534-01f21a293ed1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2rth" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.948578 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.948622 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-dpwrd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.948707 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/9ab64f63-1297-4556-ae3e-51009cdf2384-etcd-client\") pod \"apiserver-7bbb656c7d-t8497\" (UID: \"9ab64f63-1297-4556-ae3e-51009cdf2384\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.949199 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.949492 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/560e68a9-862a-4814-a55d-4ea3e9932ea3-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-zlklt\" (UID: \"560e68a9-862a-4814-a55d-4ea3e9932ea3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zlklt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.949769 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.950044 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-service-ca\") pod \"console-f9d7485db-n8xpb\" (UID: \"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b\") " pod="openshift-console/console-f9d7485db-n8xpb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.950270 5014 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/9ab64f63-1297-4556-ae3e-51009cdf2384-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-t8497\" (UID: \"9ab64f63-1297-4556-ae3e-51009cdf2384\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.950355 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/87307a00-6574-43d3-b6d8-5b5ee80ce95a-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rjr6k\" (UID: \"87307a00-6574-43d3-b6d8-5b5ee80ce95a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rjr6k" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.950721 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/17f469fa-831e-4e38-8ace-55fc476a337c-available-featuregates\") pod \"openshift-config-operator-7777fb866f-w4sdb\" (UID: \"17f469fa-831e-4e38-8ace-55fc476a337c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w4sdb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.951184 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/98477023-48bc-48a1-a641-dafcc6b08624-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-mt6mh\" (UID: \"98477023-48bc-48a1-a641-dafcc6b08624\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mt6mh" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.951776 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-console-serving-cert\") pod \"console-f9d7485db-n8xpb\" (UID: \"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b\") " 
pod="openshift-console/console-f9d7485db-n8xpb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.951920 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/560e68a9-862a-4814-a55d-4ea3e9932ea3-config\") pod \"openshift-apiserver-operator-796bbdcf4f-zlklt\" (UID: \"560e68a9-862a-4814-a55d-4ea3e9932ea3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zlklt" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.952519 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b75737b1-5468-47d8-ab73-59b1d3a174a3-auth-proxy-config\") pod \"machine-approver-56656f9798-wc7xs\" (UID: \"b75737b1-5468-47d8-ab73-59b1d3a174a3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wc7xs" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.952635 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9ab64f63-1297-4556-ae3e-51009cdf2384-serving-cert\") pod \"apiserver-7bbb656c7d-t8497\" (UID: \"9ab64f63-1297-4556-ae3e-51009cdf2384\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.953078 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e957ffda-f443-4f2b-9a8e-4e2fd41beaad-auth-proxy-config\") pod \"machine-config-operator-74547568cd-kckfv\" (UID: \"e957ffda-f443-4f2b-9a8e-4e2fd41beaad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kckfv" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.953075 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/b75737b1-5468-47d8-ab73-59b1d3a174a3-machine-approver-tls\") pod 
\"machine-approver-56656f9798-wc7xs\" (UID: \"b75737b1-5468-47d8-ab73-59b1d3a174a3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wc7xs" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.953462 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9ab64f63-1297-4556-ae3e-51009cdf2384-audit-policies\") pod \"apiserver-7bbb656c7d-t8497\" (UID: \"9ab64f63-1297-4556-ae3e-51009cdf2384\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.953494 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9ab64f63-1297-4556-ae3e-51009cdf2384-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-t8497\" (UID: \"9ab64f63-1297-4556-ae3e-51009cdf2384\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.954090 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/992977b5-9456-46f3-9534-01f21a293ed1-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-g2rth\" (UID: \"992977b5-9456-46f3-9534-01f21a293ed1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2rth" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.954201 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-tk557"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.954240 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-console-config\") pod \"console-f9d7485db-n8xpb\" (UID: \"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b\") " pod="openshift-console/console-f9d7485db-n8xpb" Feb 28 04:37:15 crc 
kubenswrapper[5014]: I0228 04:37:15.954446 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5560e581-fc17-4214-bd6d-2f2332633891-metrics-tls\") pod \"dns-operator-744455d44c-cslg4\" (UID: \"5560e581-fc17-4214-bd6d-2f2332633891\") " pod="openshift-dns-operator/dns-operator-744455d44c-cslg4" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.954516 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9ab64f63-1297-4556-ae3e-51009cdf2384-audit-dir\") pod \"apiserver-7bbb656c7d-t8497\" (UID: \"9ab64f63-1297-4556-ae3e-51009cdf2384\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.955013 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/671a6723-b559-48d1-957e-a56ee7ef7a64-service-ca-bundle\") pod \"authentication-operator-69f744f599-kct58\" (UID: \"671a6723-b559-48d1-957e-a56ee7ef7a64\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kct58" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.955173 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7509910c-9915-4f07-80a6-d0b1eccd9213-profile-collector-cert\") pod \"olm-operator-6b444d44fb-s68g5\" (UID: \"7509910c-9915-4f07-80a6-d0b1eccd9213\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s68g5" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.956317 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b75737b1-5468-47d8-ab73-59b1d3a174a3-config\") pod \"machine-approver-56656f9798-wc7xs\" (UID: \"b75737b1-5468-47d8-ab73-59b1d3a174a3\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wc7xs" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.956480 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-trusted-ca-bundle\") pod \"console-f9d7485db-n8xpb\" (UID: \"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b\") " pod="openshift-console/console-f9d7485db-n8xpb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.956693 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87307a00-6574-43d3-b6d8-5b5ee80ce95a-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rjr6k\" (UID: \"87307a00-6574-43d3-b6d8-5b5ee80ce95a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rjr6k" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.957174 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91c20ddd-76d6-4e47-a24e-ec090ff039de-config-volume\") pod \"collect-profiles-29537550-hbznb\" (UID: \"91c20ddd-76d6-4e47-a24e-ec090ff039de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537550-hbznb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.957512 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7509910c-9915-4f07-80a6-d0b1eccd9213-srv-cert\") pod \"olm-operator-6b444d44fb-s68g5\" (UID: \"7509910c-9915-4f07-80a6-d0b1eccd9213\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s68g5" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.957576 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.957844 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.958401 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/671a6723-b559-48d1-957e-a56ee7ef7a64-serving-cert\") pod \"authentication-operator-69f744f599-kct58\" (UID: \"671a6723-b559-48d1-957e-a56ee7ef7a64\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kct58" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.963687 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.968989 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 
crc kubenswrapper[5014]: I0228 04:37:15.970423 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/9ab64f63-1297-4556-ae3e-51009cdf2384-encryption-config\") pod \"apiserver-7bbb656c7d-t8497\" (UID: \"9ab64f63-1297-4556-ae3e-51009cdf2384\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.970821 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.973076 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rjr6k"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.974919 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ddxr6"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.976837 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s68g5"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.977326 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.977772 5014 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.978341 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/91c20ddd-76d6-4e47-a24e-ec090ff039de-secret-volume\") pod \"collect-profiles-29537550-hbznb\" (UID: \"91c20ddd-76d6-4e47-a24e-ec090ff039de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537550-hbznb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.981718 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-cghpw"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.981906 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sm9r4"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.982015 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-w4sdb"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.983201 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.983487 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17f469fa-831e-4e38-8ace-55fc476a337c-serving-cert\") pod \"openshift-config-operator-7777fb866f-w4sdb\" (UID: \"17f469fa-831e-4e38-8ace-55fc476a337c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w4sdb" Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.984443 5014 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bl64c"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.985482 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-66gzd"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.988592 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q5llk"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.988695 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wtnl5"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.990192 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-x5xfl"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.994689 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-n8xpb"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.994911 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mt6mh"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.994998 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537556-wwqxk"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.999734 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rxdxq"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.999862 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-rc7jm"] Feb 28 04:37:15 crc kubenswrapper[5014]: I0228 04:37:15.999949 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-tjhpt"] Feb 28 
04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.001072 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-tjhpt" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.005059 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-bpskb" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.006164 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.006604 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-mmkz2"] Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.007293 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5xc62"] Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.007311 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qmrxt"] Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.007382 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-mmkz2" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.011351 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-zmwn2"] Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.011418 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tm7nq"] Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.011464 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vzl28"] Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.012718 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-console-oauth-config\") pod \"console-f9d7485db-n8xpb\" (UID: \"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b\") " pod="openshift-console/console-f9d7485db-n8xpb" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.012870 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jplqc"] Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.013446 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.013951 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wlspw"] Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.015187 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kckfv"] Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.016615 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wxczw"] 
Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.017430 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pnbpp"] Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.018457 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-dpwrd"] Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.019513 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-mmkz2"] Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.020680 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-fxcmt"] Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.022188 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-fxcmt"] Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.022328 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-fxcmt" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.028754 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.046797 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7c8bd670-9de4-422c-9ff3-12f776fbc47f-proxy-tls\") pod \"machine-config-controller-84d6567774-x5xfl\" (UID: \"7c8bd670-9de4-422c-9ff3-12f776fbc47f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x5xfl" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.046857 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7c8bd670-9de4-422c-9ff3-12f776fbc47f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-x5xfl\" (UID: 
\"7c8bd670-9de4-422c-9ff3-12f776fbc47f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x5xfl" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.046963 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8c0701a3-3ba4-42cc-b570-bb688909e07d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-q5llk\" (UID: \"8c0701a3-3ba4-42cc-b570-bb688909e07d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q5llk" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.047093 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-589ks\" (UniqueName: \"kubernetes.io/projected/7c8bd670-9de4-422c-9ff3-12f776fbc47f-kube-api-access-589ks\") pod \"machine-config-controller-84d6567774-x5xfl\" (UID: \"7c8bd670-9de4-422c-9ff3-12f776fbc47f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x5xfl" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.047150 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkgtv\" (UniqueName: \"kubernetes.io/projected/8c0701a3-3ba4-42cc-b570-bb688909e07d-kube-api-access-hkgtv\") pod \"package-server-manager-789f6589d5-q5llk\" (UID: \"8c0701a3-3ba4-42cc-b570-bb688909e07d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q5llk" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.047197 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3f5d06e-976d-42ce-9693-bc41c2ee9154-config\") pod \"kube-controller-manager-operator-78b949d7b-tm7nq\" (UID: \"c3f5d06e-976d-42ce-9693-bc41c2ee9154\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tm7nq" Feb 28 04:37:16 crc kubenswrapper[5014]: 
I0228 04:37:16.047227 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3f5d06e-976d-42ce-9693-bc41c2ee9154-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-tm7nq\" (UID: \"c3f5d06e-976d-42ce-9693-bc41c2ee9154\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tm7nq" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.047285 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3f5d06e-976d-42ce-9693-bc41c2ee9154-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-tm7nq\" (UID: \"c3f5d06e-976d-42ce-9693-bc41c2ee9154\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tm7nq" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.048284 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7c8bd670-9de4-422c-9ff3-12f776fbc47f-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-x5xfl\" (UID: \"7c8bd670-9de4-422c-9ff3-12f776fbc47f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x5xfl" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.049995 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.069020 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.089989 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.102601 5014 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/8bf7d4c6-1fd5-4fa4-a7a3-bf5af08d7eba-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-z87qr\" (UID: \"8bf7d4c6-1fd5-4fa4-a7a3-bf5af08d7eba\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-z87qr" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.111459 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.119573 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.134091 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.149845 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.169727 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.176322 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e957ffda-f443-4f2b-9a8e-4e2fd41beaad-proxy-tls\") pod \"machine-config-operator-74547568cd-kckfv\" (UID: \"e957ffda-f443-4f2b-9a8e-4e2fd41beaad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kckfv" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.189763 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 28 04:37:16 crc 
kubenswrapper[5014]: I0228 04:37:16.204674 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-bpskb"] Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.209636 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 28 04:37:16 crc kubenswrapper[5014]: W0228 04:37:16.211504 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc33c5a22_b0a7_4e91_9d5b_28b9908fbfd6.slice/crio-0d109874179fade71bd250ee075fab8f23b44bb7403f1351abb2bb3438b43e92 WatchSource:0}: Error finding container 0d109874179fade71bd250ee075fab8f23b44bb7403f1351abb2bb3438b43e92: Status 404 returned error can't find the container with id 0d109874179fade71bd250ee075fab8f23b44bb7403f1351abb2bb3438b43e92 Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.213214 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e957ffda-f443-4f2b-9a8e-4e2fd41beaad-images\") pod \"machine-config-operator-74547568cd-kckfv\" (UID: \"e957ffda-f443-4f2b-9a8e-4e2fd41beaad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kckfv" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.229672 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.239585 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bpskb" event={"ID":"c33c5a22-b0a7-4e91-9d5b-28b9908fbfd6","Type":"ContainerStarted","Data":"0d109874179fade71bd250ee075fab8f23b44bb7403f1351abb2bb3438b43e92"} Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.250266 5014 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.269343 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.289832 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.309992 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.329663 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.350662 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.377047 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.390130 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.409864 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.429798 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.449683 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 28 
04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.464605 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7c8bd670-9de4-422c-9ff3-12f776fbc47f-proxy-tls\") pod \"machine-config-controller-84d6567774-x5xfl\" (UID: \"7c8bd670-9de4-422c-9ff3-12f776fbc47f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x5xfl" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.470081 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.489339 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.509929 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.530615 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.559923 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.568414 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.588996 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.610388 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.631499 
5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.650333 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.661858 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3f5d06e-976d-42ce-9693-bc41c2ee9154-config\") pod \"kube-controller-manager-operator-78b949d7b-tm7nq\" (UID: \"c3f5d06e-976d-42ce-9693-bc41c2ee9154\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tm7nq" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.679880 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.692666 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3f5d06e-976d-42ce-9693-bc41c2ee9154-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-tm7nq\" (UID: \"c3f5d06e-976d-42ce-9693-bc41c2ee9154\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tm7nq" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.710707 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.717639 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv785\" (UniqueName: \"kubernetes.io/projected/0060d8e2-8ffe-4a64-9109-57cb6f97ec0e-kube-api-access-fv785\") pod \"apiserver-76f77b778f-488hv\" (UID: \"0060d8e2-8ffe-4a64-9109-57cb6f97ec0e\") " 
pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.730205 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.750556 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.769065 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.789831 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.834127 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.849478 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.868152 5014 request.go:700] Waited for 1.013068137s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dimage-registry-tls&limit=500&resourceVersion=0 Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.870575 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.889399 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 
04:37:16.909170 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.910404 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.931562 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.949965 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.969424 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 28 04:37:16 crc kubenswrapper[5014]: I0228 04:37:16.990489 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.009993 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.030032 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.045600 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/8c0701a3-3ba4-42cc-b570-bb688909e07d-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-q5llk\" (UID: \"8c0701a3-3ba4-42cc-b570-bb688909e07d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q5llk" Feb 28 04:37:17 crc 
kubenswrapper[5014]: I0228 04:37:17.050686 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.069840 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.091181 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.110881 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.131090 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.131461 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-488hv"] Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.149185 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.170355 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.189966 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.229992 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.249149 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bpskb" 
event={"ID":"c33c5a22-b0a7-4e91-9d5b-28b9908fbfd6","Type":"ContainerStarted","Data":"7c20ced50ff58f029f588f81d3dbb108d7ba7335eb649545874959e06c5e61c1"} Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.249219 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-bpskb" event={"ID":"c33c5a22-b0a7-4e91-9d5b-28b9908fbfd6","Type":"ContainerStarted","Data":"e77c17ee2848b1631535b03115dc7c17074315479dbcf891f50ecdadd844a8c5"} Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.249740 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.251721 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-488hv" event={"ID":"0060d8e2-8ffe-4a64-9109-57cb6f97ec0e","Type":"ContainerStarted","Data":"c13e2924b080a97fec3aa0e1b0a934fd3889b2da7f9b248ecce2e5865dc33ff8"} Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.269921 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.295508 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.309266 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.329631 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.350266 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.369696 5014 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.390689 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.412239 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.429486 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.449960 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.469515 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.490078 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.511139 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.530493 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.549665 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.586345 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nvp2\" (UniqueName: \"kubernetes.io/projected/17f469fa-831e-4e38-8ace-55fc476a337c-kube-api-access-5nvp2\") pod 
\"openshift-config-operator-7777fb866f-w4sdb\" (UID: \"17f469fa-831e-4e38-8ace-55fc476a337c\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-w4sdb" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.612325 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tl2rx\" (UniqueName: \"kubernetes.io/projected/87307a00-6574-43d3-b6d8-5b5ee80ce95a-kube-api-access-tl2rx\") pod \"kube-storage-version-migrator-operator-b67b599dd-rjr6k\" (UID: \"87307a00-6574-43d3-b6d8-5b5ee80ce95a\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rjr6k" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.638500 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnb99\" (UniqueName: \"kubernetes.io/projected/98477023-48bc-48a1-a641-dafcc6b08624-kube-api-access-pnb99\") pod \"cluster-samples-operator-665b6dd947-mt6mh\" (UID: \"98477023-48bc-48a1-a641-dafcc6b08624\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mt6mh" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.645762 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcs5x\" (UniqueName: \"kubernetes.io/projected/9ab64f63-1297-4556-ae3e-51009cdf2384-kube-api-access-zcs5x\") pod \"apiserver-7bbb656c7d-t8497\" (UID: \"9ab64f63-1297-4556-ae3e-51009cdf2384\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.668690 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98v2b\" (UniqueName: \"kubernetes.io/projected/8bf7d4c6-1fd5-4fa4-a7a3-bf5af08d7eba-kube-api-access-98v2b\") pod \"control-plane-machine-set-operator-78cbb6b69f-z87qr\" (UID: \"8bf7d4c6-1fd5-4fa4-a7a3-bf5af08d7eba\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-z87qr" Feb 28 04:37:17 crc 
kubenswrapper[5014]: I0228 04:37:17.689017 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcmdq\" (UniqueName: \"kubernetes.io/projected/7509910c-9915-4f07-80a6-d0b1eccd9213-kube-api-access-lcmdq\") pod \"olm-operator-6b444d44fb-s68g5\" (UID: \"7509910c-9915-4f07-80a6-d0b1eccd9213\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s68g5" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.708338 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2cqs\" (UniqueName: \"kubernetes.io/projected/9f80824d-7fc7-44e3-982c-2856a99523be-kube-api-access-p2cqs\") pod \"downloads-7954f5f757-cghpw\" (UID: \"9f80824d-7fc7-44e3-982c-2856a99523be\") " pod="openshift-console/downloads-7954f5f757-cghpw" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.718269 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mt6mh" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.723165 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w4sdb" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.726006 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbpjs\" (UniqueName: \"kubernetes.io/projected/e957ffda-f443-4f2b-9a8e-4e2fd41beaad-kube-api-access-pbpjs\") pod \"machine-config-operator-74547568cd-kckfv\" (UID: \"e957ffda-f443-4f2b-9a8e-4e2fd41beaad\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kckfv" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.741397 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kckfv" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.749343 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnh9l\" (UniqueName: \"kubernetes.io/projected/5560e581-fc17-4214-bd6d-2f2332633891-kube-api-access-gnh9l\") pod \"dns-operator-744455d44c-cslg4\" (UID: \"5560e581-fc17-4214-bd6d-2f2332633891\") " pod="openshift-dns-operator/dns-operator-744455d44c-cslg4" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.769937 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.772029 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsfgw\" (UniqueName: \"kubernetes.io/projected/992977b5-9456-46f3-9534-01f21a293ed1-kube-api-access-lsfgw\") pod \"cluster-image-registry-operator-dc59b4c8b-g2rth\" (UID: \"992977b5-9456-46f3-9534-01f21a293ed1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2rth" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.789899 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.809712 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.817435 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s68g5" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.835189 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-cslg4" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.840721 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-cghpw" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.853201 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rjr6k" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.856937 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/992977b5-9456-46f3-9534-01f21a293ed1-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-g2rth\" (UID: \"992977b5-9456-46f3-9534-01f21a293ed1\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2rth" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.866989 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.868558 5014 request.go:700] Waited for 1.914951509s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication-operator/serviceaccounts/authentication-operator/token Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.877583 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c64w8\" (UniqueName: \"kubernetes.io/projected/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-kube-api-access-c64w8\") pod \"console-f9d7485db-n8xpb\" (UID: \"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b\") " pod="openshift-console/console-f9d7485db-n8xpb" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.891253 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2rth" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.900181 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdztj\" (UniqueName: \"kubernetes.io/projected/671a6723-b559-48d1-957e-a56ee7ef7a64-kube-api-access-sdztj\") pod \"authentication-operator-69f744f599-kct58\" (UID: \"671a6723-b559-48d1-957e-a56ee7ef7a64\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kct58" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.915184 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv7rd\" (UniqueName: \"kubernetes.io/projected/b75737b1-5468-47d8-ab73-59b1d3a174a3-kube-api-access-tv7rd\") pod \"machine-approver-56656f9798-wc7xs\" (UID: \"b75737b1-5468-47d8-ab73-59b1d3a174a3\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wc7xs" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.929372 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk76x\" (UniqueName: \"kubernetes.io/projected/560e68a9-862a-4814-a55d-4ea3e9932ea3-kube-api-access-xk76x\") pod \"openshift-apiserver-operator-796bbdcf4f-zlklt\" (UID: \"560e68a9-862a-4814-a55d-4ea3e9932ea3\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zlklt" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.939089 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-z87qr" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.946846 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnzgw\" (UniqueName: \"kubernetes.io/projected/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-kube-api-access-cnzgw\") pod \"oauth-openshift-558db77b4-fkqnd\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.970065 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.970532 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx42m\" (UniqueName: \"kubernetes.io/projected/91c20ddd-76d6-4e47-a24e-ec090ff039de-kube-api-access-hx42m\") pod \"collect-profiles-29537550-hbznb\" (UID: \"91c20ddd-76d6-4e47-a24e-ec090ff039de\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537550-hbznb" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.984611 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wc7xs" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.991508 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-kct58" Feb 28 04:37:17 crc kubenswrapper[5014]: I0228 04:37:17.992672 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.003283 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mt6mh"] Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.003963 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.010703 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.032939 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-n8xpb" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.033509 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.039974 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-kckfv"] Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.050214 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.070749 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.091349 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.116890 5014 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.128905 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.148872 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.170893 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-cslg4"] Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.176629 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zlklt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.197867 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-589ks\" (UniqueName: \"kubernetes.io/projected/7c8bd670-9de4-422c-9ff3-12f776fbc47f-kube-api-access-589ks\") pod \"machine-config-controller-84d6567774-x5xfl\" (UID: \"7c8bd670-9de4-422c-9ff3-12f776fbc47f\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x5xfl" Feb 28 04:37:18 crc kubenswrapper[5014]: W0228 04:37:18.213436 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5560e581_fc17_4214_bd6d_2f2332633891.slice/crio-8228fc12cfa6fa8dc25ebe73f29c4c481ba40e2f38cb4795a117802659b0e2d6 WatchSource:0}: Error finding container 8228fc12cfa6fa8dc25ebe73f29c4c481ba40e2f38cb4795a117802659b0e2d6: Status 404 returned error can't find the container with id 8228fc12cfa6fa8dc25ebe73f29c4c481ba40e2f38cb4795a117802659b0e2d6 Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.216563 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkgtv\" (UniqueName: \"kubernetes.io/projected/8c0701a3-3ba4-42cc-b570-bb688909e07d-kube-api-access-hkgtv\") pod \"package-server-manager-789f6589d5-q5llk\" (UID: \"8c0701a3-3ba4-42cc-b570-bb688909e07d\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q5llk" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.229600 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537550-hbznb" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.232444 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3f5d06e-976d-42ce-9693-bc41c2ee9154-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-tm7nq\" (UID: \"c3f5d06e-976d-42ce-9693-bc41c2ee9154\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tm7nq" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.254098 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tm7nq" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.288251 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8bf1ab3c-8003-4a48-b248-30282df03e95-installation-pull-secrets\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.288316 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5915ea5d-0cf3-405e-9372-18cfcc5dc993-webhook-cert\") pod \"packageserver-d55dfcdfc-pnbpp\" (UID: \"5915ea5d-0cf3-405e-9372-18cfcc5dc993\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pnbpp" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.288378 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9aa43b2-c924-4e9d-8c38-cf39ee922d3e-config\") pod \"route-controller-manager-6576b87f9c-qmrxt\" (UID: 
\"b9aa43b2-c924-4e9d-8c38-cf39ee922d3e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qmrxt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.288408 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3-service-ca-bundle\") pod \"router-default-5444994796-h4z55\" (UID: \"6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3\") " pod="openshift-ingress/router-default-5444994796-h4z55" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.288427 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/797b2165-10cf-4886-a106-7f1010672030-config\") pod \"controller-manager-879f6c89f-bl64c\" (UID: \"797b2165-10cf-4886-a106-7f1010672030\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bl64c" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.288455 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxltc\" (UniqueName: \"kubernetes.io/projected/b9aa43b2-c924-4e9d-8c38-cf39ee922d3e-kube-api-access-dxltc\") pod \"route-controller-manager-6576b87f9c-qmrxt\" (UID: \"b9aa43b2-c924-4e9d-8c38-cf39ee922d3e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qmrxt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.288513 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9aa43b2-c924-4e9d-8c38-cf39ee922d3e-client-ca\") pod \"route-controller-manager-6576b87f9c-qmrxt\" (UID: \"b9aa43b2-c924-4e9d-8c38-cf39ee922d3e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qmrxt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.288530 5014 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ead1f1e3-5d6d-4701-8b6f-99cd842d23bc-srv-cert\") pod \"catalog-operator-68c6474976-jplqc\" (UID: \"ead1f1e3-5d6d-4701-8b6f-99cd842d23bc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jplqc" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.288550 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c2450d26-3188-41cd-bc8d-cda5368b7db2-metrics-tls\") pod \"ingress-operator-5b745b69d9-rc7jm\" (UID: \"c2450d26-3188-41cd-bc8d-cda5368b7db2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rc7jm" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.288566 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl2p8\" (UniqueName: \"kubernetes.io/projected/797b2165-10cf-4886-a106-7f1010672030-kube-api-access-fl2p8\") pod \"controller-manager-879f6c89f-bl64c\" (UID: \"797b2165-10cf-4886-a106-7f1010672030\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bl64c" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.288607 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9aa43b2-c924-4e9d-8c38-cf39ee922d3e-serving-cert\") pod \"route-controller-manager-6576b87f9c-qmrxt\" (UID: \"b9aa43b2-c924-4e9d-8c38-cf39ee922d3e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qmrxt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.288654 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3-default-certificate\") pod \"router-default-5444994796-h4z55\" (UID: 
\"6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3\") " pod="openshift-ingress/router-default-5444994796-h4z55" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.288725 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e0e433c6-1184-4c3b-993a-53dd1db80f8a-etcd-ca\") pod \"etcd-operator-b45778765-tk557\" (UID: \"e0e433c6-1184-4c3b-993a-53dd1db80f8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tk557" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.288744 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rl6lc\" (UniqueName: \"kubernetes.io/projected/c2450d26-3188-41cd-bc8d-cda5368b7db2-kube-api-access-rl6lc\") pod \"ingress-operator-5b745b69d9-rc7jm\" (UID: \"c2450d26-3188-41cd-bc8d-cda5368b7db2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rc7jm" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.288794 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.288920 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8bf1ab3c-8003-4a48-b248-30282df03e95-registry-tls\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.288961 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/db2c3c93-6469-4c1c-939e-426aaeabfce4-config\") pod \"console-operator-58897d9998-66gzd\" (UID: \"db2c3c93-6469-4c1c-939e-426aaeabfce4\") " pod="openshift-console-operator/console-operator-58897d9998-66gzd" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.289022 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deecabfd-701d-4737-b267-61d42cf2c52d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-5xc62\" (UID: \"deecabfd-701d-4737-b267-61d42cf2c52d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5xc62" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.289053 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8bf5e004-613f-44d5-8b27-04f1e555ed88-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wtnl5\" (UID: \"8bf5e004-613f-44d5-8b27-04f1e555ed88\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wtnl5" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.289072 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac663b86-4954-4552-a9bb-a0ea8eff89ef-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-wlspw\" (UID: \"ac663b86-4954-4552-a9bb-a0ea8eff89ef\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wlspw" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.289123 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/deecabfd-701d-4737-b267-61d42cf2c52d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-5xc62\" (UID: \"deecabfd-701d-4737-b267-61d42cf2c52d\") 
" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5xc62" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.289162 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2450d26-3188-41cd-bc8d-cda5368b7db2-trusted-ca\") pod \"ingress-operator-5b745b69d9-rc7jm\" (UID: \"c2450d26-3188-41cd-bc8d-cda5368b7db2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rc7jm" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.289254 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djpcq\" (UniqueName: \"kubernetes.io/projected/8bf5e004-613f-44d5-8b27-04f1e555ed88-kube-api-access-djpcq\") pod \"multus-admission-controller-857f4d67dd-wtnl5\" (UID: \"8bf5e004-613f-44d5-8b27-04f1e555ed88\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wtnl5" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.289275 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9qj8\" (UniqueName: \"kubernetes.io/projected/6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3-kube-api-access-p9qj8\") pod \"router-default-5444994796-h4z55\" (UID: \"6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3\") " pod="openshift-ingress/router-default-5444994796-h4z55" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.289327 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/39e99a31-4c12-4e77-918c-a7229c6899e9-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rxdxq\" (UID: \"39e99a31-4c12-4e77-918c-a7229c6899e9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rxdxq" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.289347 5014 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8qcl\" (UniqueName: \"kubernetes.io/projected/e0e433c6-1184-4c3b-993a-53dd1db80f8a-kube-api-access-t8qcl\") pod \"etcd-operator-b45778765-tk557\" (UID: \"e0e433c6-1184-4c3b-993a-53dd1db80f8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tk557" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.289374 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlcpb\" (UniqueName: \"kubernetes.io/projected/96429a28-52a4-4465-810a-1bdfa6dee2bf-kube-api-access-wlcpb\") pod \"migrator-59844c95c7-ddxr6\" (UID: \"96429a28-52a4-4465-810a-1bdfa6dee2bf\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ddxr6" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.289399 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e0e433c6-1184-4c3b-993a-53dd1db80f8a-etcd-service-ca\") pod \"etcd-operator-b45778765-tk557\" (UID: \"e0e433c6-1184-4c3b-993a-53dd1db80f8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tk557" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.289421 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e0e433c6-1184-4c3b-993a-53dd1db80f8a-etcd-client\") pod \"etcd-operator-b45778765-tk557\" (UID: \"e0e433c6-1184-4c3b-993a-53dd1db80f8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tk557" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.289926 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/797b2165-10cf-4886-a106-7f1010672030-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bl64c\" (UID: 
\"797b2165-10cf-4886-a106-7f1010672030\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bl64c" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.289969 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5915ea5d-0cf3-405e-9372-18cfcc5dc993-apiservice-cert\") pod \"packageserver-d55dfcdfc-pnbpp\" (UID: \"5915ea5d-0cf3-405e-9372-18cfcc5dc993\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pnbpp" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.289986 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c2450d26-3188-41cd-bc8d-cda5368b7db2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-rc7jm\" (UID: \"c2450d26-3188-41cd-bc8d-cda5368b7db2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rc7jm" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.290006 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5915ea5d-0cf3-405e-9372-18cfcc5dc993-tmpfs\") pod \"packageserver-d55dfcdfc-pnbpp\" (UID: \"5915ea5d-0cf3-405e-9372-18cfcc5dc993\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pnbpp" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.290038 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8bf1ab3c-8003-4a48-b248-30282df03e95-registry-certificates\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.290055 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac663b86-4954-4552-a9bb-a0ea8eff89ef-config\") pod \"kube-apiserver-operator-766d6c64bb-wlspw\" (UID: \"ac663b86-4954-4552-a9bb-a0ea8eff89ef\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wlspw" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.290094 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8bf1ab3c-8003-4a48-b248-30282df03e95-bound-sa-token\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.290156 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3-stats-auth\") pod \"router-default-5444994796-h4z55\" (UID: \"6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3\") " pod="openshift-ingress/router-default-5444994796-h4z55" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.290183 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlzvg\" (UniqueName: \"kubernetes.io/projected/deecabfd-701d-4737-b267-61d42cf2c52d-kube-api-access-nlzvg\") pod \"openshift-controller-manager-operator-756b6f6bc6-5xc62\" (UID: \"deecabfd-701d-4737-b267-61d42cf2c52d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5xc62" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.290205 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39e99a31-4c12-4e77-918c-a7229c6899e9-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rxdxq\" (UID: 
\"39e99a31-4c12-4e77-918c-a7229c6899e9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rxdxq" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.290298 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3-metrics-certs\") pod \"router-default-5444994796-h4z55\" (UID: \"6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3\") " pod="openshift-ingress/router-default-5444994796-h4z55" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.290360 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39e99a31-4c12-4e77-918c-a7229c6899e9-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rxdxq\" (UID: \"39e99a31-4c12-4e77-918c-a7229c6899e9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rxdxq" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.290375 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db2c3c93-6469-4c1c-939e-426aaeabfce4-serving-cert\") pod \"console-operator-58897d9998-66gzd\" (UID: \"db2c3c93-6469-4c1c-939e-426aaeabfce4\") " pod="openshift-console-operator/console-operator-58897d9998-66gzd" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.290389 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/797b2165-10cf-4886-a106-7f1010672030-serving-cert\") pod \"controller-manager-879f6c89f-bl64c\" (UID: \"797b2165-10cf-4886-a106-7f1010672030\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bl64c" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.290430 5014 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8bf1ab3c-8003-4a48-b248-30282df03e95-trusted-ca\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.290466 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/797b2165-10cf-4886-a106-7f1010672030-client-ca\") pod \"controller-manager-879f6c89f-bl64c\" (UID: \"797b2165-10cf-4886-a106-7f1010672030\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bl64c" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.290504 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfdcp\" (UniqueName: \"kubernetes.io/projected/8bf1ab3c-8003-4a48-b248-30282df03e95-kube-api-access-nfdcp\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.290521 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0e433c6-1184-4c3b-993a-53dd1db80f8a-config\") pod \"etcd-operator-b45778765-tk557\" (UID: \"e0e433c6-1184-4c3b-993a-53dd1db80f8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tk557" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.290549 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac663b86-4954-4552-a9bb-a0ea8eff89ef-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wlspw\" (UID: \"ac663b86-4954-4552-a9bb-a0ea8eff89ef\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wlspw" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.290565 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhm9r\" (UniqueName: \"kubernetes.io/projected/5915ea5d-0cf3-405e-9372-18cfcc5dc993-kube-api-access-hhm9r\") pod \"packageserver-d55dfcdfc-pnbpp\" (UID: \"5915ea5d-0cf3-405e-9372-18cfcc5dc993\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pnbpp" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.290596 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db2c3c93-6469-4c1c-939e-426aaeabfce4-trusted-ca\") pod \"console-operator-58897d9998-66gzd\" (UID: \"db2c3c93-6469-4c1c-939e-426aaeabfce4\") " pod="openshift-console-operator/console-operator-58897d9998-66gzd" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.290637 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8bf1ab3c-8003-4a48-b248-30282df03e95-ca-trust-extracted\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.290656 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ead1f1e3-5d6d-4701-8b6f-99cd842d23bc-profile-collector-cert\") pod \"catalog-operator-68c6474976-jplqc\" (UID: \"ead1f1e3-5d6d-4701-8b6f-99cd842d23bc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jplqc" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.290675 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-fnr9r\" (UniqueName: \"kubernetes.io/projected/db2c3c93-6469-4c1c-939e-426aaeabfce4-kube-api-access-fnr9r\") pod \"console-operator-58897d9998-66gzd\" (UID: \"db2c3c93-6469-4c1c-939e-426aaeabfce4\") " pod="openshift-console-operator/console-operator-58897d9998-66gzd" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.290717 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0e433c6-1184-4c3b-993a-53dd1db80f8a-serving-cert\") pod \"etcd-operator-b45778765-tk557\" (UID: \"e0e433c6-1184-4c3b-993a-53dd1db80f8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tk557" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.290738 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zv2d\" (UniqueName: \"kubernetes.io/projected/ead1f1e3-5d6d-4701-8b6f-99cd842d23bc-kube-api-access-9zv2d\") pod \"catalog-operator-68c6474976-jplqc\" (UID: \"ead1f1e3-5d6d-4701-8b6f-99cd842d23bc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jplqc" Feb 28 04:37:18 crc kubenswrapper[5014]: E0228 04:37:18.293945 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:18.793931029 +0000 UTC m=+227.464056939 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.298015 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-w4sdb"] Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.301514 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kckfv" event={"ID":"e957ffda-f443-4f2b-9a8e-4e2fd41beaad","Type":"ContainerStarted","Data":"1f19732330d7280a1d88a1a1b7c380d3028fe032122dc766e8ba9a142c76edff"} Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.307119 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s68g5"] Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.313093 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q5llk" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.323860 5014 generic.go:334] "Generic (PLEG): container finished" podID="0060d8e2-8ffe-4a64-9109-57cb6f97ec0e" containerID="0cf7cb674bd0765cdb85877379e1771846c2e558dbea6ba5487c8d682d994df4" exitCode=0 Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.324079 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-488hv" event={"ID":"0060d8e2-8ffe-4a64-9109-57cb6f97ec0e","Type":"ContainerDied","Data":"0cf7cb674bd0765cdb85877379e1771846c2e558dbea6ba5487c8d682d994df4"} Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.335085 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497"] Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.335455 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wc7xs" event={"ID":"b75737b1-5468-47d8-ab73-59b1d3a174a3","Type":"ContainerStarted","Data":"9d6add1ac8cf4abe2ff8edf5476f6e2e525bf3dfa0c030675ab7f0fb71167ffa"} Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.352587 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mt6mh" event={"ID":"98477023-48bc-48a1-a641-dafcc6b08624","Type":"ContainerStarted","Data":"04492e4a236b353332dd314c49be2538e37943a603f0d7d3663c13b43b9896f9"} Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.387531 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-cslg4" event={"ID":"5560e581-fc17-4214-bd6d-2f2332633891","Type":"ContainerStarted","Data":"8228fc12cfa6fa8dc25ebe73f29c4c481ba40e2f38cb4795a117802659b0e2d6"} Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.391437 5014 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:18 crc kubenswrapper[5014]: E0228 04:37:18.391674 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:18.891651683 +0000 UTC m=+227.561777593 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.391730 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxltc\" (UniqueName: \"kubernetes.io/projected/b9aa43b2-c924-4e9d-8c38-cf39ee922d3e-kube-api-access-dxltc\") pod \"route-controller-manager-6576b87f9c-qmrxt\" (UID: \"b9aa43b2-c924-4e9d-8c38-cf39ee922d3e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qmrxt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.391752 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/797b2165-10cf-4886-a106-7f1010672030-config\") pod \"controller-manager-879f6c89f-bl64c\" (UID: \"797b2165-10cf-4886-a106-7f1010672030\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-bl64c" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.391776 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a3d454ff-aef1-427e-a572-c21562fb3659-mountpoint-dir\") pod \"csi-hostpathplugin-fxcmt\" (UID: \"a3d454ff-aef1-427e-a572-c21562fb3659\") " pod="hostpath-provisioner/csi-hostpathplugin-fxcmt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.392931 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ead1f1e3-5d6d-4701-8b6f-99cd842d23bc-srv-cert\") pod \"catalog-operator-68c6474976-jplqc\" (UID: \"ead1f1e3-5d6d-4701-8b6f-99cd842d23bc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jplqc" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.393119 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c2450d26-3188-41cd-bc8d-cda5368b7db2-metrics-tls\") pod \"ingress-operator-5b745b69d9-rc7jm\" (UID: \"c2450d26-3188-41cd-bc8d-cda5368b7db2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rc7jm" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.393153 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl2p8\" (UniqueName: \"kubernetes.io/projected/797b2165-10cf-4886-a106-7f1010672030-kube-api-access-fl2p8\") pod \"controller-manager-879f6c89f-bl64c\" (UID: \"797b2165-10cf-4886-a106-7f1010672030\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bl64c" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.393184 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9aa43b2-c924-4e9d-8c38-cf39ee922d3e-client-ca\") pod 
\"route-controller-manager-6576b87f9c-qmrxt\" (UID: \"b9aa43b2-c924-4e9d-8c38-cf39ee922d3e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qmrxt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.393211 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/85352047-6877-471a-8f68-76e28f6be644-signing-key\") pod \"service-ca-9c57cc56f-vzl28\" (UID: \"85352047-6877-471a-8f68-76e28f6be644\") " pod="openshift-service-ca/service-ca-9c57cc56f-vzl28" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.393235 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9aa43b2-c924-4e9d-8c38-cf39ee922d3e-serving-cert\") pod \"route-controller-manager-6576b87f9c-qmrxt\" (UID: \"b9aa43b2-c924-4e9d-8c38-cf39ee922d3e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qmrxt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.393292 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b60b7614-e66f-4184-b1ff-10fb0ba1ed31-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wxczw\" (UID: \"b60b7614-e66f-4184-b1ff-10fb0ba1ed31\") " pod="openshift-marketplace/marketplace-operator-79b997595-wxczw" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.393337 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3-default-certificate\") pod \"router-default-5444994796-h4z55\" (UID: \"6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3\") " pod="openshift-ingress/router-default-5444994796-h4z55" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.393375 5014 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-468jw\" (UniqueName: \"kubernetes.io/projected/459eac3b-ce97-42fa-966d-47072347d2b8-kube-api-access-468jw\") pod \"service-ca-operator-777779d784-zmwn2\" (UID: \"459eac3b-ce97-42fa-966d-47072347d2b8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zmwn2" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.393405 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t8fg\" (UniqueName: \"kubernetes.io/projected/85352047-6877-471a-8f68-76e28f6be644-kube-api-access-2t8fg\") pod \"service-ca-9c57cc56f-vzl28\" (UID: \"85352047-6877-471a-8f68-76e28f6be644\") " pod="openshift-service-ca/service-ca-9c57cc56f-vzl28" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.393429 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e0e433c6-1184-4c3b-993a-53dd1db80f8a-etcd-ca\") pod \"etcd-operator-b45778765-tk557\" (UID: \"e0e433c6-1184-4c3b-993a-53dd1db80f8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tk557" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.393453 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rl6lc\" (UniqueName: \"kubernetes.io/projected/c2450d26-3188-41cd-bc8d-cda5368b7db2-kube-api-access-rl6lc\") pod \"ingress-operator-5b745b69d9-rc7jm\" (UID: \"c2450d26-3188-41cd-bc8d-cda5368b7db2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rc7jm" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.393506 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: 
\"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.393531 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a3d454ff-aef1-427e-a572-c21562fb3659-csi-data-dir\") pod \"csi-hostpathplugin-fxcmt\" (UID: \"a3d454ff-aef1-427e-a572-c21562fb3659\") " pod="hostpath-provisioner/csi-hostpathplugin-fxcmt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.393585 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8bf1ab3c-8003-4a48-b248-30282df03e95-registry-tls\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.393641 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db2c3c93-6469-4c1c-939e-426aaeabfce4-config\") pod \"console-operator-58897d9998-66gzd\" (UID: \"db2c3c93-6469-4c1c-939e-426aaeabfce4\") " pod="openshift-console-operator/console-operator-58897d9998-66gzd" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.393672 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8bf5e004-613f-44d5-8b27-04f1e555ed88-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wtnl5\" (UID: \"8bf5e004-613f-44d5-8b27-04f1e555ed88\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wtnl5" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.393700 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac663b86-4954-4552-a9bb-a0ea8eff89ef-serving-cert\") pod 
\"kube-apiserver-operator-766d6c64bb-wlspw\" (UID: \"ac663b86-4954-4552-a9bb-a0ea8eff89ef\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wlspw" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.393725 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deecabfd-701d-4737-b267-61d42cf2c52d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-5xc62\" (UID: \"deecabfd-701d-4737-b267-61d42cf2c52d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5xc62" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.393752 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/459eac3b-ce97-42fa-966d-47072347d2b8-config\") pod \"service-ca-operator-777779d784-zmwn2\" (UID: \"459eac3b-ce97-42fa-966d-47072347d2b8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zmwn2" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.393829 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2450d26-3188-41cd-bc8d-cda5368b7db2-trusted-ca\") pod \"ingress-operator-5b745b69d9-rc7jm\" (UID: \"c2450d26-3188-41cd-bc8d-cda5368b7db2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rc7jm" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.393863 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/deecabfd-701d-4737-b267-61d42cf2c52d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-5xc62\" (UID: \"deecabfd-701d-4737-b267-61d42cf2c52d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5xc62" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.393887 5014 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/85352047-6877-471a-8f68-76e28f6be644-signing-cabundle\") pod \"service-ca-9c57cc56f-vzl28\" (UID: \"85352047-6877-471a-8f68-76e28f6be644\") " pod="openshift-service-ca/service-ca-9c57cc56f-vzl28" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.393910 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chvwb\" (UniqueName: \"kubernetes.io/projected/9d0d0159-396b-49a3-a9bd-3346a06f0556-kube-api-access-chvwb\") pod \"ingress-canary-mmkz2\" (UID: \"9d0d0159-396b-49a3-a9bd-3346a06f0556\") " pod="openshift-ingress-canary/ingress-canary-mmkz2" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.393951 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djpcq\" (UniqueName: \"kubernetes.io/projected/8bf5e004-613f-44d5-8b27-04f1e555ed88-kube-api-access-djpcq\") pod \"multus-admission-controller-857f4d67dd-wtnl5\" (UID: \"8bf5e004-613f-44d5-8b27-04f1e555ed88\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wtnl5" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.393975 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9qj8\" (UniqueName: \"kubernetes.io/projected/6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3-kube-api-access-p9qj8\") pod \"router-default-5444994796-h4z55\" (UID: \"6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3\") " pod="openshift-ingress/router-default-5444994796-h4z55" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.394003 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/39e99a31-4c12-4e77-918c-a7229c6899e9-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rxdxq\" (UID: \"39e99a31-4c12-4e77-918c-a7229c6899e9\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rxdxq" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.394026 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a3d454ff-aef1-427e-a572-c21562fb3659-registration-dir\") pod \"csi-hostpathplugin-fxcmt\" (UID: \"a3d454ff-aef1-427e-a572-c21562fb3659\") " pod="hostpath-provisioner/csi-hostpathplugin-fxcmt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.394050 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/01f6c816-1e7c-41ee-90a2-38976e24bac8-node-bootstrap-token\") pod \"machine-config-server-tjhpt\" (UID: \"01f6c816-1e7c-41ee-90a2-38976e24bac8\") " pod="openshift-machine-config-operator/machine-config-server-tjhpt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.394084 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8qcl\" (UniqueName: \"kubernetes.io/projected/e0e433c6-1184-4c3b-993a-53dd1db80f8a-kube-api-access-t8qcl\") pod \"etcd-operator-b45778765-tk557\" (UID: \"e0e433c6-1184-4c3b-993a-53dd1db80f8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tk557" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.394107 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlcpb\" (UniqueName: \"kubernetes.io/projected/96429a28-52a4-4465-810a-1bdfa6dee2bf-kube-api-access-wlcpb\") pod \"migrator-59844c95c7-ddxr6\" (UID: \"96429a28-52a4-4465-810a-1bdfa6dee2bf\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ddxr6" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.394129 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flcnc\" (UniqueName: 
\"kubernetes.io/projected/01f6c816-1e7c-41ee-90a2-38976e24bac8-kube-api-access-flcnc\") pod \"machine-config-server-tjhpt\" (UID: \"01f6c816-1e7c-41ee-90a2-38976e24bac8\") " pod="openshift-machine-config-operator/machine-config-server-tjhpt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.394210 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e0e433c6-1184-4c3b-993a-53dd1db80f8a-etcd-client\") pod \"etcd-operator-b45778765-tk557\" (UID: \"e0e433c6-1184-4c3b-993a-53dd1db80f8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tk557" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.394249 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/797b2165-10cf-4886-a106-7f1010672030-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bl64c\" (UID: \"797b2165-10cf-4886-a106-7f1010672030\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bl64c" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.394286 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e0e433c6-1184-4c3b-993a-53dd1db80f8a-etcd-service-ca\") pod \"etcd-operator-b45778765-tk557\" (UID: \"e0e433c6-1184-4c3b-993a-53dd1db80f8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tk557" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.394291 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/797b2165-10cf-4886-a106-7f1010672030-config\") pod \"controller-manager-879f6c89f-bl64c\" (UID: \"797b2165-10cf-4886-a106-7f1010672030\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bl64c" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.394318 5014 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5915ea5d-0cf3-405e-9372-18cfcc5dc993-tmpfs\") pod \"packageserver-d55dfcdfc-pnbpp\" (UID: \"5915ea5d-0cf3-405e-9372-18cfcc5dc993\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pnbpp" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.394338 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5915ea5d-0cf3-405e-9372-18cfcc5dc993-apiservice-cert\") pod \"packageserver-d55dfcdfc-pnbpp\" (UID: \"5915ea5d-0cf3-405e-9372-18cfcc5dc993\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pnbpp" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.394389 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c2450d26-3188-41cd-bc8d-cda5368b7db2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-rc7jm\" (UID: \"c2450d26-3188-41cd-bc8d-cda5368b7db2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rc7jm" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.394414 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8bf1ab3c-8003-4a48-b248-30282df03e95-registry-certificates\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.394432 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac663b86-4954-4552-a9bb-a0ea8eff89ef-config\") pod \"kube-apiserver-operator-766d6c64bb-wlspw\" (UID: \"ac663b86-4954-4552-a9bb-a0ea8eff89ef\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wlspw" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 
04:37:18.394483 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8bf1ab3c-8003-4a48-b248-30282df03e95-bound-sa-token\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.394508 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a3d454ff-aef1-427e-a572-c21562fb3659-plugins-dir\") pod \"csi-hostpathplugin-fxcmt\" (UID: \"a3d454ff-aef1-427e-a572-c21562fb3659\") " pod="hostpath-provisioner/csi-hostpathplugin-fxcmt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.394554 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3-stats-auth\") pod \"router-default-5444994796-h4z55\" (UID: \"6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3\") " pod="openshift-ingress/router-default-5444994796-h4z55" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.394574 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlzvg\" (UniqueName: \"kubernetes.io/projected/deecabfd-701d-4737-b267-61d42cf2c52d-kube-api-access-nlzvg\") pod \"openshift-controller-manager-operator-756b6f6bc6-5xc62\" (UID: \"deecabfd-701d-4737-b267-61d42cf2c52d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5xc62" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.394618 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39e99a31-4c12-4e77-918c-a7229c6899e9-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rxdxq\" (UID: \"39e99a31-4c12-4e77-918c-a7229c6899e9\") 
" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rxdxq" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.394642 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw72z\" (UniqueName: \"kubernetes.io/projected/e929a2fa-f34f-4100-9d0b-45752ddba504-kube-api-access-sw72z\") pod \"dns-default-dpwrd\" (UID: \"e929a2fa-f34f-4100-9d0b-45752ddba504\") " pod="openshift-dns/dns-default-dpwrd" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.394680 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e929a2fa-f34f-4100-9d0b-45752ddba504-metrics-tls\") pod \"dns-default-dpwrd\" (UID: \"e929a2fa-f34f-4100-9d0b-45752ddba504\") " pod="openshift-dns/dns-default-dpwrd" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.394756 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3-metrics-certs\") pod \"router-default-5444994796-h4z55\" (UID: \"6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3\") " pod="openshift-ingress/router-default-5444994796-h4z55" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.394774 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxk7d\" (UniqueName: \"kubernetes.io/projected/d84dec61-f4ef-4e0b-adb1-66694017a156-kube-api-access-hxk7d\") pod \"auto-csr-approver-29537556-wwqxk\" (UID: \"d84dec61-f4ef-4e0b-adb1-66694017a156\") " pod="openshift-infra/auto-csr-approver-29537556-wwqxk" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.394828 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e929a2fa-f34f-4100-9d0b-45752ddba504-config-volume\") pod 
\"dns-default-dpwrd\" (UID: \"e929a2fa-f34f-4100-9d0b-45752ddba504\") " pod="openshift-dns/dns-default-dpwrd" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.394850 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39e99a31-4c12-4e77-918c-a7229c6899e9-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rxdxq\" (UID: \"39e99a31-4c12-4e77-918c-a7229c6899e9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rxdxq" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.394875 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db2c3c93-6469-4c1c-939e-426aaeabfce4-serving-cert\") pod \"console-operator-58897d9998-66gzd\" (UID: \"db2c3c93-6469-4c1c-939e-426aaeabfce4\") " pod="openshift-console-operator/console-operator-58897d9998-66gzd" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.394892 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/797b2165-10cf-4886-a106-7f1010672030-serving-cert\") pod \"controller-manager-879f6c89f-bl64c\" (UID: \"797b2165-10cf-4886-a106-7f1010672030\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bl64c" Feb 28 04:37:18 crc kubenswrapper[5014]: E0228 04:37:18.394910 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:18.894895215 +0000 UTC m=+227.565021125 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.394935 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8bf1ab3c-8003-4a48-b248-30282df03e95-trusted-ca\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.394974 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/797b2165-10cf-4886-a106-7f1010672030-client-ca\") pod \"controller-manager-879f6c89f-bl64c\" (UID: \"797b2165-10cf-4886-a106-7f1010672030\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bl64c" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.395027 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfdcp\" (UniqueName: \"kubernetes.io/projected/8bf1ab3c-8003-4a48-b248-30282df03e95-kube-api-access-nfdcp\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.395064 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0e433c6-1184-4c3b-993a-53dd1db80f8a-config\") pod \"etcd-operator-b45778765-tk557\" (UID: 
\"e0e433c6-1184-4c3b-993a-53dd1db80f8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tk557" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.395085 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/01f6c816-1e7c-41ee-90a2-38976e24bac8-certs\") pod \"machine-config-server-tjhpt\" (UID: \"01f6c816-1e7c-41ee-90a2-38976e24bac8\") " pod="openshift-machine-config-operator/machine-config-server-tjhpt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.395105 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac663b86-4954-4552-a9bb-a0ea8eff89ef-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wlspw\" (UID: \"ac663b86-4954-4552-a9bb-a0ea8eff89ef\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wlspw" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.395124 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhm9r\" (UniqueName: \"kubernetes.io/projected/5915ea5d-0cf3-405e-9372-18cfcc5dc993-kube-api-access-hhm9r\") pod \"packageserver-d55dfcdfc-pnbpp\" (UID: \"5915ea5d-0cf3-405e-9372-18cfcc5dc993\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pnbpp" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.395146 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db2c3c93-6469-4c1c-939e-426aaeabfce4-trusted-ca\") pod \"console-operator-58897d9998-66gzd\" (UID: \"db2c3c93-6469-4c1c-939e-426aaeabfce4\") " pod="openshift-console-operator/console-operator-58897d9998-66gzd" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.395174 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/8bf1ab3c-8003-4a48-b248-30282df03e95-ca-trust-extracted\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.395203 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ead1f1e3-5d6d-4701-8b6f-99cd842d23bc-profile-collector-cert\") pod \"catalog-operator-68c6474976-jplqc\" (UID: \"ead1f1e3-5d6d-4701-8b6f-99cd842d23bc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jplqc" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.395229 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a3d454ff-aef1-427e-a572-c21562fb3659-socket-dir\") pod \"csi-hostpathplugin-fxcmt\" (UID: \"a3d454ff-aef1-427e-a572-c21562fb3659\") " pod="hostpath-provisioner/csi-hostpathplugin-fxcmt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.395255 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnr9r\" (UniqueName: \"kubernetes.io/projected/db2c3c93-6469-4c1c-939e-426aaeabfce4-kube-api-access-fnr9r\") pod \"console-operator-58897d9998-66gzd\" (UID: \"db2c3c93-6469-4c1c-939e-426aaeabfce4\") " pod="openshift-console-operator/console-operator-58897d9998-66gzd" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.395279 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/459eac3b-ce97-42fa-966d-47072347d2b8-serving-cert\") pod \"service-ca-operator-777779d784-zmwn2\" (UID: \"459eac3b-ce97-42fa-966d-47072347d2b8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zmwn2" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 
04:37:18.395337 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0e433c6-1184-4c3b-993a-53dd1db80f8a-serving-cert\") pod \"etcd-operator-b45778765-tk557\" (UID: \"e0e433c6-1184-4c3b-993a-53dd1db80f8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tk557" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.395623 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zv2d\" (UniqueName: \"kubernetes.io/projected/ead1f1e3-5d6d-4701-8b6f-99cd842d23bc-kube-api-access-9zv2d\") pod \"catalog-operator-68c6474976-jplqc\" (UID: \"ead1f1e3-5d6d-4701-8b6f-99cd842d23bc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jplqc" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.395654 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8bf1ab3c-8003-4a48-b248-30282df03e95-installation-pull-secrets\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.395691 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5915ea5d-0cf3-405e-9372-18cfcc5dc993-webhook-cert\") pod \"packageserver-d55dfcdfc-pnbpp\" (UID: \"5915ea5d-0cf3-405e-9372-18cfcc5dc993\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pnbpp" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.395709 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b60b7614-e66f-4184-b1ff-10fb0ba1ed31-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wxczw\" (UID: 
\"b60b7614-e66f-4184-b1ff-10fb0ba1ed31\") " pod="openshift-marketplace/marketplace-operator-79b997595-wxczw" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.395731 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vckt4\" (UniqueName: \"kubernetes.io/projected/b60b7614-e66f-4184-b1ff-10fb0ba1ed31-kube-api-access-vckt4\") pod \"marketplace-operator-79b997595-wxczw\" (UID: \"b60b7614-e66f-4184-b1ff-10fb0ba1ed31\") " pod="openshift-marketplace/marketplace-operator-79b997595-wxczw" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.395756 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhnnf\" (UniqueName: \"kubernetes.io/projected/a3d454ff-aef1-427e-a572-c21562fb3659-kube-api-access-vhnnf\") pod \"csi-hostpathplugin-fxcmt\" (UID: \"a3d454ff-aef1-427e-a572-c21562fb3659\") " pod="hostpath-provisioner/csi-hostpathplugin-fxcmt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.395772 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9d0d0159-396b-49a3-a9bd-3346a06f0556-cert\") pod \"ingress-canary-mmkz2\" (UID: \"9d0d0159-396b-49a3-a9bd-3346a06f0556\") " pod="openshift-ingress-canary/ingress-canary-mmkz2" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.395792 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3-service-ca-bundle\") pod \"router-default-5444994796-h4z55\" (UID: \"6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3\") " pod="openshift-ingress/router-default-5444994796-h4z55" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.395826 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b9aa43b2-c924-4e9d-8c38-cf39ee922d3e-config\") pod \"route-controller-manager-6576b87f9c-qmrxt\" (UID: \"b9aa43b2-c924-4e9d-8c38-cf39ee922d3e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qmrxt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.396408 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c2450d26-3188-41cd-bc8d-cda5368b7db2-trusted-ca\") pod \"ingress-operator-5b745b69d9-rc7jm\" (UID: \"c2450d26-3188-41cd-bc8d-cda5368b7db2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rc7jm" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.398606 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9aa43b2-c924-4e9d-8c38-cf39ee922d3e-client-ca\") pod \"route-controller-manager-6576b87f9c-qmrxt\" (UID: \"b9aa43b2-c924-4e9d-8c38-cf39ee922d3e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qmrxt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.399126 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9aa43b2-c924-4e9d-8c38-cf39ee922d3e-config\") pod \"route-controller-manager-6576b87f9c-qmrxt\" (UID: \"b9aa43b2-c924-4e9d-8c38-cf39ee922d3e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qmrxt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.399998 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/5915ea5d-0cf3-405e-9372-18cfcc5dc993-tmpfs\") pod \"packageserver-d55dfcdfc-pnbpp\" (UID: \"5915ea5d-0cf3-405e-9372-18cfcc5dc993\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pnbpp" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.400584 5014 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/e0e433c6-1184-4c3b-993a-53dd1db80f8a-etcd-ca\") pod \"etcd-operator-b45778765-tk557\" (UID: \"e0e433c6-1184-4c3b-993a-53dd1db80f8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tk557" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.401630 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3-stats-auth\") pod \"router-default-5444994796-h4z55\" (UID: \"6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3\") " pod="openshift-ingress/router-default-5444994796-h4z55" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.405488 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8bf1ab3c-8003-4a48-b248-30282df03e95-ca-trust-extracted\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.406289 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39e99a31-4c12-4e77-918c-a7229c6899e9-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rxdxq\" (UID: \"39e99a31-4c12-4e77-918c-a7229c6899e9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rxdxq" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.409693 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8bf1ab3c-8003-4a48-b248-30282df03e95-trusted-ca\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.410150 5014 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3-service-ca-bundle\") pod \"router-default-5444994796-h4z55\" (UID: \"6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3\") " pod="openshift-ingress/router-default-5444994796-h4z55" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.410339 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3-metrics-certs\") pod \"router-default-5444994796-h4z55\" (UID: \"6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3\") " pod="openshift-ingress/router-default-5444994796-h4z55" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.411241 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/797b2165-10cf-4886-a106-7f1010672030-client-ca\") pod \"controller-manager-879f6c89f-bl64c\" (UID: \"797b2165-10cf-4886-a106-7f1010672030\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bl64c" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.411725 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/797b2165-10cf-4886-a106-7f1010672030-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-bl64c\" (UID: \"797b2165-10cf-4886-a106-7f1010672030\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bl64c" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.412579 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deecabfd-701d-4737-b267-61d42cf2c52d-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-5xc62\" (UID: \"deecabfd-701d-4737-b267-61d42cf2c52d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5xc62" Feb 28 
04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.412618 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac663b86-4954-4552-a9bb-a0ea8eff89ef-config\") pod \"kube-apiserver-operator-766d6c64bb-wlspw\" (UID: \"ac663b86-4954-4552-a9bb-a0ea8eff89ef\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wlspw" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.412948 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/e0e433c6-1184-4c3b-993a-53dd1db80f8a-etcd-service-ca\") pod \"etcd-operator-b45778765-tk557\" (UID: \"e0e433c6-1184-4c3b-993a-53dd1db80f8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tk557" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.413108 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0e433c6-1184-4c3b-993a-53dd1db80f8a-config\") pod \"etcd-operator-b45778765-tk557\" (UID: \"e0e433c6-1184-4c3b-993a-53dd1db80f8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tk557" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.415264 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ead1f1e3-5d6d-4701-8b6f-99cd842d23bc-srv-cert\") pod \"catalog-operator-68c6474976-jplqc\" (UID: \"ead1f1e3-5d6d-4701-8b6f-99cd842d23bc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jplqc" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.416368 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8bf1ab3c-8003-4a48-b248-30282df03e95-registry-certificates\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.416422 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/797b2165-10cf-4886-a106-7f1010672030-serving-cert\") pod \"controller-manager-879f6c89f-bl64c\" (UID: \"797b2165-10cf-4886-a106-7f1010672030\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bl64c" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.430865 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ead1f1e3-5d6d-4701-8b6f-99cd842d23bc-profile-collector-cert\") pod \"catalog-operator-68c6474976-jplqc\" (UID: \"ead1f1e3-5d6d-4701-8b6f-99cd842d23bc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jplqc" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.432115 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8bf5e004-613f-44d5-8b27-04f1e555ed88-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wtnl5\" (UID: \"8bf5e004-613f-44d5-8b27-04f1e555ed88\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wtnl5" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.432856 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5915ea5d-0cf3-405e-9372-18cfcc5dc993-apiservice-cert\") pod \"packageserver-d55dfcdfc-pnbpp\" (UID: \"5915ea5d-0cf3-405e-9372-18cfcc5dc993\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pnbpp" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.440255 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5915ea5d-0cf3-405e-9372-18cfcc5dc993-webhook-cert\") pod 
\"packageserver-d55dfcdfc-pnbpp\" (UID: \"5915ea5d-0cf3-405e-9372-18cfcc5dc993\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pnbpp" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.442485 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/e0e433c6-1184-4c3b-993a-53dd1db80f8a-etcd-client\") pod \"etcd-operator-b45778765-tk557\" (UID: \"e0e433c6-1184-4c3b-993a-53dd1db80f8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tk557" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.444602 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8bf1ab3c-8003-4a48-b248-30282df03e95-installation-pull-secrets\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.444692 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8bf1ab3c-8003-4a48-b248-30282df03e95-registry-tls\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.444853 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac663b86-4954-4552-a9bb-a0ea8eff89ef-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-wlspw\" (UID: \"ac663b86-4954-4552-a9bb-a0ea8eff89ef\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wlspw" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.445742 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/c2450d26-3188-41cd-bc8d-cda5368b7db2-metrics-tls\") pod \"ingress-operator-5b745b69d9-rc7jm\" (UID: \"c2450d26-3188-41cd-bc8d-cda5368b7db2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rc7jm" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.446333 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e0e433c6-1184-4c3b-993a-53dd1db80f8a-serving-cert\") pod \"etcd-operator-b45778765-tk557\" (UID: \"e0e433c6-1184-4c3b-993a-53dd1db80f8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tk557" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.447719 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3-default-certificate\") pod \"router-default-5444994796-h4z55\" (UID: \"6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3\") " pod="openshift-ingress/router-default-5444994796-h4z55" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.458164 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39e99a31-4c12-4e77-918c-a7229c6899e9-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rxdxq\" (UID: \"39e99a31-4c12-4e77-918c-a7229c6899e9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rxdxq" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.467110 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9aa43b2-c924-4e9d-8c38-cf39ee922d3e-serving-cert\") pod \"route-controller-manager-6576b87f9c-qmrxt\" (UID: \"b9aa43b2-c924-4e9d-8c38-cf39ee922d3e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qmrxt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.468445 5014 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/deecabfd-701d-4737-b267-61d42cf2c52d-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-5xc62\" (UID: \"deecabfd-701d-4737-b267-61d42cf2c52d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5xc62" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.479114 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x5xfl" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.480928 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db2c3c93-6469-4c1c-939e-426aaeabfce4-config\") pod \"console-operator-58897d9998-66gzd\" (UID: \"db2c3c93-6469-4c1c-939e-426aaeabfce4\") " pod="openshift-console-operator/console-operator-58897d9998-66gzd" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.482750 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rl6lc\" (UniqueName: \"kubernetes.io/projected/c2450d26-3188-41cd-bc8d-cda5368b7db2-kube-api-access-rl6lc\") pod \"ingress-operator-5b745b69d9-rc7jm\" (UID: \"c2450d26-3188-41cd-bc8d-cda5368b7db2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rc7jm" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.482856 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rjr6k"] Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.485627 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-cghpw"] Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.489928 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/db2c3c93-6469-4c1c-939e-426aaeabfce4-trusted-ca\") pod \"console-operator-58897d9998-66gzd\" (UID: \"db2c3c93-6469-4c1c-939e-426aaeabfce4\") " pod="openshift-console-operator/console-operator-58897d9998-66gzd" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.491421 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxltc\" (UniqueName: \"kubernetes.io/projected/b9aa43b2-c924-4e9d-8c38-cf39ee922d3e-kube-api-access-dxltc\") pod \"route-controller-manager-6576b87f9c-qmrxt\" (UID: \"b9aa43b2-c924-4e9d-8c38-cf39ee922d3e\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qmrxt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.491469 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db2c3c93-6469-4c1c-939e-426aaeabfce4-serving-cert\") pod \"console-operator-58897d9998-66gzd\" (UID: \"db2c3c93-6469-4c1c-939e-426aaeabfce4\") " pod="openshift-console-operator/console-operator-58897d9998-66gzd" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.492162 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/39e99a31-4c12-4e77-918c-a7229c6899e9-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-rxdxq\" (UID: \"39e99a31-4c12-4e77-918c-a7229c6899e9\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rxdxq" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.497433 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8qcl\" (UniqueName: \"kubernetes.io/projected/e0e433c6-1184-4c3b-993a-53dd1db80f8a-kube-api-access-t8qcl\") pod \"etcd-operator-b45778765-tk557\" (UID: \"e0e433c6-1184-4c3b-993a-53dd1db80f8a\") " pod="openshift-etcd-operator/etcd-operator-b45778765-tk557" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 
04:37:18.497948 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:18 crc kubenswrapper[5014]: E0228 04:37:18.498161 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:18.998128776 +0000 UTC m=+227.668254686 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.498223 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a3d454ff-aef1-427e-a572-c21562fb3659-socket-dir\") pod \"csi-hostpathplugin-fxcmt\" (UID: \"a3d454ff-aef1-427e-a572-c21562fb3659\") " pod="hostpath-provisioner/csi-hostpathplugin-fxcmt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.498292 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/459eac3b-ce97-42fa-966d-47072347d2b8-serving-cert\") pod \"service-ca-operator-777779d784-zmwn2\" (UID: \"459eac3b-ce97-42fa-966d-47072347d2b8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zmwn2" 
Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.498362 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b60b7614-e66f-4184-b1ff-10fb0ba1ed31-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wxczw\" (UID: \"b60b7614-e66f-4184-b1ff-10fb0ba1ed31\") " pod="openshift-marketplace/marketplace-operator-79b997595-wxczw" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.498388 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vckt4\" (UniqueName: \"kubernetes.io/projected/b60b7614-e66f-4184-b1ff-10fb0ba1ed31-kube-api-access-vckt4\") pod \"marketplace-operator-79b997595-wxczw\" (UID: \"b60b7614-e66f-4184-b1ff-10fb0ba1ed31\") " pod="openshift-marketplace/marketplace-operator-79b997595-wxczw" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.498411 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhnnf\" (UniqueName: \"kubernetes.io/projected/a3d454ff-aef1-427e-a572-c21562fb3659-kube-api-access-vhnnf\") pod \"csi-hostpathplugin-fxcmt\" (UID: \"a3d454ff-aef1-427e-a572-c21562fb3659\") " pod="hostpath-provisioner/csi-hostpathplugin-fxcmt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.498434 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9d0d0159-396b-49a3-a9bd-3346a06f0556-cert\") pod \"ingress-canary-mmkz2\" (UID: \"9d0d0159-396b-49a3-a9bd-3346a06f0556\") " pod="openshift-ingress-canary/ingress-canary-mmkz2" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.498502 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a3d454ff-aef1-427e-a572-c21562fb3659-mountpoint-dir\") pod \"csi-hostpathplugin-fxcmt\" (UID: \"a3d454ff-aef1-427e-a572-c21562fb3659\") " 
pod="hostpath-provisioner/csi-hostpathplugin-fxcmt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.498540 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/85352047-6877-471a-8f68-76e28f6be644-signing-key\") pod \"service-ca-9c57cc56f-vzl28\" (UID: \"85352047-6877-471a-8f68-76e28f6be644\") " pod="openshift-service-ca/service-ca-9c57cc56f-vzl28" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.498586 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b60b7614-e66f-4184-b1ff-10fb0ba1ed31-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wxczw\" (UID: \"b60b7614-e66f-4184-b1ff-10fb0ba1ed31\") " pod="openshift-marketplace/marketplace-operator-79b997595-wxczw" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.498710 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-468jw\" (UniqueName: \"kubernetes.io/projected/459eac3b-ce97-42fa-966d-47072347d2b8-kube-api-access-468jw\") pod \"service-ca-operator-777779d784-zmwn2\" (UID: \"459eac3b-ce97-42fa-966d-47072347d2b8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zmwn2" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.498745 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t8fg\" (UniqueName: \"kubernetes.io/projected/85352047-6877-471a-8f68-76e28f6be644-kube-api-access-2t8fg\") pod \"service-ca-9c57cc56f-vzl28\" (UID: \"85352047-6877-471a-8f68-76e28f6be644\") " pod="openshift-service-ca/service-ca-9c57cc56f-vzl28" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.498783 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.498819 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a3d454ff-aef1-427e-a572-c21562fb3659-csi-data-dir\") pod \"csi-hostpathplugin-fxcmt\" (UID: \"a3d454ff-aef1-427e-a572-c21562fb3659\") " pod="hostpath-provisioner/csi-hostpathplugin-fxcmt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.498860 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/459eac3b-ce97-42fa-966d-47072347d2b8-config\") pod \"service-ca-operator-777779d784-zmwn2\" (UID: \"459eac3b-ce97-42fa-966d-47072347d2b8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zmwn2" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.498890 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/85352047-6877-471a-8f68-76e28f6be644-signing-cabundle\") pod \"service-ca-9c57cc56f-vzl28\" (UID: \"85352047-6877-471a-8f68-76e28f6be644\") " pod="openshift-service-ca/service-ca-9c57cc56f-vzl28" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.498910 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chvwb\" (UniqueName: \"kubernetes.io/projected/9d0d0159-396b-49a3-a9bd-3346a06f0556-kube-api-access-chvwb\") pod \"ingress-canary-mmkz2\" (UID: \"9d0d0159-396b-49a3-a9bd-3346a06f0556\") " pod="openshift-ingress-canary/ingress-canary-mmkz2" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.498953 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a3d454ff-aef1-427e-a572-c21562fb3659-registration-dir\") pod \"csi-hostpathplugin-fxcmt\" (UID: \"a3d454ff-aef1-427e-a572-c21562fb3659\") " pod="hostpath-provisioner/csi-hostpathplugin-fxcmt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.498978 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/01f6c816-1e7c-41ee-90a2-38976e24bac8-node-bootstrap-token\") pod \"machine-config-server-tjhpt\" (UID: \"01f6c816-1e7c-41ee-90a2-38976e24bac8\") " pod="openshift-machine-config-operator/machine-config-server-tjhpt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.499016 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flcnc\" (UniqueName: \"kubernetes.io/projected/01f6c816-1e7c-41ee-90a2-38976e24bac8-kube-api-access-flcnc\") pod \"machine-config-server-tjhpt\" (UID: \"01f6c816-1e7c-41ee-90a2-38976e24bac8\") " pod="openshift-machine-config-operator/machine-config-server-tjhpt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.499098 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a3d454ff-aef1-427e-a572-c21562fb3659-plugins-dir\") pod \"csi-hostpathplugin-fxcmt\" (UID: \"a3d454ff-aef1-427e-a572-c21562fb3659\") " pod="hostpath-provisioner/csi-hostpathplugin-fxcmt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.499153 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw72z\" (UniqueName: \"kubernetes.io/projected/e929a2fa-f34f-4100-9d0b-45752ddba504-kube-api-access-sw72z\") pod \"dns-default-dpwrd\" (UID: \"e929a2fa-f34f-4100-9d0b-45752ddba504\") " pod="openshift-dns/dns-default-dpwrd" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.499186 5014 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e929a2fa-f34f-4100-9d0b-45752ddba504-metrics-tls\") pod \"dns-default-dpwrd\" (UID: \"e929a2fa-f34f-4100-9d0b-45752ddba504\") " pod="openshift-dns/dns-default-dpwrd" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.499245 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxk7d\" (UniqueName: \"kubernetes.io/projected/d84dec61-f4ef-4e0b-adb1-66694017a156-kube-api-access-hxk7d\") pod \"auto-csr-approver-29537556-wwqxk\" (UID: \"d84dec61-f4ef-4e0b-adb1-66694017a156\") " pod="openshift-infra/auto-csr-approver-29537556-wwqxk" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.499276 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e929a2fa-f34f-4100-9d0b-45752ddba504-config-volume\") pod \"dns-default-dpwrd\" (UID: \"e929a2fa-f34f-4100-9d0b-45752ddba504\") " pod="openshift-dns/dns-default-dpwrd" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.499332 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/01f6c816-1e7c-41ee-90a2-38976e24bac8-certs\") pod \"machine-config-server-tjhpt\" (UID: \"01f6c816-1e7c-41ee-90a2-38976e24bac8\") " pod="openshift-machine-config-operator/machine-config-server-tjhpt" Feb 28 04:37:18 crc kubenswrapper[5014]: E0228 04:37:18.500224 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:19.000205535 +0000 UTC m=+227.670331445 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.500765 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a3d454ff-aef1-427e-a572-c21562fb3659-socket-dir\") pod \"csi-hostpathplugin-fxcmt\" (UID: \"a3d454ff-aef1-427e-a572-c21562fb3659\") " pod="hostpath-provisioner/csi-hostpathplugin-fxcmt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.501400 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a3d454ff-aef1-427e-a572-c21562fb3659-mountpoint-dir\") pod \"csi-hostpathplugin-fxcmt\" (UID: \"a3d454ff-aef1-427e-a572-c21562fb3659\") " pod="hostpath-provisioner/csi-hostpathplugin-fxcmt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.502249 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a3d454ff-aef1-427e-a572-c21562fb3659-csi-data-dir\") pod \"csi-hostpathplugin-fxcmt\" (UID: \"a3d454ff-aef1-427e-a572-c21562fb3659\") " pod="hostpath-provisioner/csi-hostpathplugin-fxcmt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.504546 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/01f6c816-1e7c-41ee-90a2-38976e24bac8-certs\") pod \"machine-config-server-tjhpt\" (UID: \"01f6c816-1e7c-41ee-90a2-38976e24bac8\") " pod="openshift-machine-config-operator/machine-config-server-tjhpt" Feb 28 04:37:18 crc 
kubenswrapper[5014]: I0228 04:37:18.504639 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a3d454ff-aef1-427e-a572-c21562fb3659-plugins-dir\") pod \"csi-hostpathplugin-fxcmt\" (UID: \"a3d454ff-aef1-427e-a572-c21562fb3659\") " pod="hostpath-provisioner/csi-hostpathplugin-fxcmt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.506103 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/9d0d0159-396b-49a3-a9bd-3346a06f0556-cert\") pod \"ingress-canary-mmkz2\" (UID: \"9d0d0159-396b-49a3-a9bd-3346a06f0556\") " pod="openshift-ingress-canary/ingress-canary-mmkz2" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.506247 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/85352047-6877-471a-8f68-76e28f6be644-signing-cabundle\") pod \"service-ca-9c57cc56f-vzl28\" (UID: \"85352047-6877-471a-8f68-76e28f6be644\") " pod="openshift-service-ca/service-ca-9c57cc56f-vzl28" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.506711 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a3d454ff-aef1-427e-a572-c21562fb3659-registration-dir\") pod \"csi-hostpathplugin-fxcmt\" (UID: \"a3d454ff-aef1-427e-a572-c21562fb3659\") " pod="hostpath-provisioner/csi-hostpathplugin-fxcmt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.506917 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/459eac3b-ce97-42fa-966d-47072347d2b8-config\") pod \"service-ca-operator-777779d784-zmwn2\" (UID: \"459eac3b-ce97-42fa-966d-47072347d2b8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zmwn2" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.508633 5014 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/85352047-6877-471a-8f68-76e28f6be644-signing-key\") pod \"service-ca-9c57cc56f-vzl28\" (UID: \"85352047-6877-471a-8f68-76e28f6be644\") " pod="openshift-service-ca/service-ca-9c57cc56f-vzl28" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.508959 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/459eac3b-ce97-42fa-966d-47072347d2b8-serving-cert\") pod \"service-ca-operator-777779d784-zmwn2\" (UID: \"459eac3b-ce97-42fa-966d-47072347d2b8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zmwn2" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.509088 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e929a2fa-f34f-4100-9d0b-45752ddba504-config-volume\") pod \"dns-default-dpwrd\" (UID: \"e929a2fa-f34f-4100-9d0b-45752ddba504\") " pod="openshift-dns/dns-default-dpwrd" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.514154 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlcpb\" (UniqueName: \"kubernetes.io/projected/96429a28-52a4-4465-810a-1bdfa6dee2bf-kube-api-access-wlcpb\") pod \"migrator-59844c95c7-ddxr6\" (UID: \"96429a28-52a4-4465-810a-1bdfa6dee2bf\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ddxr6" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.516716 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rxdxq" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.517494 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b60b7614-e66f-4184-b1ff-10fb0ba1ed31-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wxczw\" (UID: \"b60b7614-e66f-4184-b1ff-10fb0ba1ed31\") " pod="openshift-marketplace/marketplace-operator-79b997595-wxczw" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.525078 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b60b7614-e66f-4184-b1ff-10fb0ba1ed31-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wxczw\" (UID: \"b60b7614-e66f-4184-b1ff-10fb0ba1ed31\") " pod="openshift-marketplace/marketplace-operator-79b997595-wxczw" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.526042 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/01f6c816-1e7c-41ee-90a2-38976e24bac8-node-bootstrap-token\") pod \"machine-config-server-tjhpt\" (UID: \"01f6c816-1e7c-41ee-90a2-38976e24bac8\") " pod="openshift-machine-config-operator/machine-config-server-tjhpt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.534690 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e929a2fa-f34f-4100-9d0b-45752ddba504-metrics-tls\") pod \"dns-default-dpwrd\" (UID: \"e929a2fa-f34f-4100-9d0b-45752ddba504\") " pod="openshift-dns/dns-default-dpwrd" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.553562 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl2p8\" (UniqueName: 
\"kubernetes.io/projected/797b2165-10cf-4886-a106-7f1010672030-kube-api-access-fl2p8\") pod \"controller-manager-879f6c89f-bl64c\" (UID: \"797b2165-10cf-4886-a106-7f1010672030\") " pod="openshift-controller-manager/controller-manager-879f6c89f-bl64c" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.557819 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhm9r\" (UniqueName: \"kubernetes.io/projected/5915ea5d-0cf3-405e-9372-18cfcc5dc993-kube-api-access-hhm9r\") pod \"packageserver-d55dfcdfc-pnbpp\" (UID: \"5915ea5d-0cf3-405e-9372-18cfcc5dc993\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pnbpp" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.560300 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-tk557" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.579267 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pnbpp" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.596383 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac663b86-4954-4552-a9bb-a0ea8eff89ef-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-wlspw\" (UID: \"ac663b86-4954-4552-a9bb-a0ea8eff89ef\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wlspw" Feb 28 04:37:18 crc kubenswrapper[5014]: W0228 04:37:18.596770 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87307a00_6574_43d3_b6d8_5b5ee80ce95a.slice/crio-d0fb284c2dda3ce033095214bbd115071a953ccc1730cac12e025d56bf0c1b2f WatchSource:0}: Error finding container d0fb284c2dda3ce033095214bbd115071a953ccc1730cac12e025d56bf0c1b2f: Status 404 returned error can't find the container with id 
d0fb284c2dda3ce033095214bbd115071a953ccc1730cac12e025d56bf0c1b2f Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.599779 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qmrxt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.601860 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:18 crc kubenswrapper[5014]: E0228 04:37:18.602375 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:19.102353324 +0000 UTC m=+227.772479234 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.629873 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9qj8\" (UniqueName: \"kubernetes.io/projected/6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3-kube-api-access-p9qj8\") pod \"router-default-5444994796-h4z55\" (UID: \"6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3\") " pod="openshift-ingress/router-default-5444994796-h4z55" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.632051 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-z87qr"] Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.634565 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-n8xpb"] Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.634709 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnr9r\" (UniqueName: \"kubernetes.io/projected/db2c3c93-6469-4c1c-939e-426aaeabfce4-kube-api-access-fnr9r\") pod \"console-operator-58897d9998-66gzd\" (UID: \"db2c3c93-6469-4c1c-939e-426aaeabfce4\") " pod="openshift-console-operator/console-operator-58897d9998-66gzd" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.643012 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kct58"] Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.647733 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nfdcp\" (UniqueName: \"kubernetes.io/projected/8bf1ab3c-8003-4a48-b248-30282df03e95-kube-api-access-nfdcp\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.683826 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8bf1ab3c-8003-4a48-b248-30282df03e95-bound-sa-token\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.684494 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c2450d26-3188-41cd-bc8d-cda5368b7db2-bound-sa-token\") pod \"ingress-operator-5b745b69d9-rc7jm\" (UID: \"c2450d26-3188-41cd-bc8d-cda5368b7db2\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rc7jm" Feb 28 04:37:18 crc kubenswrapper[5014]: W0228 04:37:18.692571 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bf7d4c6_1fd5_4fa4_a7a3_bf5af08d7eba.slice/crio-8d3aedcb7ca6c9c18c08441e6c7348faecd93e18de6734889531b1ef18af158a WatchSource:0}: Error finding container 8d3aedcb7ca6c9c18c08441e6c7348faecd93e18de6734889531b1ef18af158a: Status 404 returned error can't find the container with id 8d3aedcb7ca6c9c18c08441e6c7348faecd93e18de6734889531b1ef18af158a Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.708652 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djpcq\" (UniqueName: \"kubernetes.io/projected/8bf5e004-613f-44d5-8b27-04f1e555ed88-kube-api-access-djpcq\") pod \"multus-admission-controller-857f4d67dd-wtnl5\" (UID: \"8bf5e004-613f-44d5-8b27-04f1e555ed88\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-wtnl5" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.711639 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:18 crc kubenswrapper[5014]: E0228 04:37:18.715125 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:19.215098526 +0000 UTC m=+227.885224436 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.724723 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlzvg\" (UniqueName: \"kubernetes.io/projected/deecabfd-701d-4737-b267-61d42cf2c52d-kube-api-access-nlzvg\") pod \"openshift-controller-manager-operator-756b6f6bc6-5xc62\" (UID: \"deecabfd-701d-4737-b267-61d42cf2c52d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5xc62" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.728871 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fkqnd"] Feb 
28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.739556 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-66gzd" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.747436 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2rth"] Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.757281 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zv2d\" (UniqueName: \"kubernetes.io/projected/ead1f1e3-5d6d-4701-8b6f-99cd842d23bc-kube-api-access-9zv2d\") pod \"catalog-operator-68c6474976-jplqc\" (UID: \"ead1f1e3-5d6d-4701-8b6f-99cd842d23bc\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jplqc" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.769751 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhnnf\" (UniqueName: \"kubernetes.io/projected/a3d454ff-aef1-427e-a572-c21562fb3659-kube-api-access-vhnnf\") pod \"csi-hostpathplugin-fxcmt\" (UID: \"a3d454ff-aef1-427e-a572-c21562fb3659\") " pod="hostpath-provisioner/csi-hostpathplugin-fxcmt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.775998 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bl64c" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.780048 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5xc62" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.797960 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vckt4\" (UniqueName: \"kubernetes.io/projected/b60b7614-e66f-4184-b1ff-10fb0ba1ed31-kube-api-access-vckt4\") pod \"marketplace-operator-79b997595-wxczw\" (UID: \"b60b7614-e66f-4184-b1ff-10fb0ba1ed31\") " pod="openshift-marketplace/marketplace-operator-79b997595-wxczw" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.805309 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ddxr6" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.812473 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:18 crc kubenswrapper[5014]: E0228 04:37:18.812672 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:19.312644624 +0000 UTC m=+227.982770534 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.812857 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:18 crc kubenswrapper[5014]: E0228 04:37:18.813280 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:19.313264502 +0000 UTC m=+227.983390402 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.813505 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flcnc\" (UniqueName: \"kubernetes.io/projected/01f6c816-1e7c-41ee-90a2-38976e24bac8-kube-api-access-flcnc\") pod \"machine-config-server-tjhpt\" (UID: \"01f6c816-1e7c-41ee-90a2-38976e24bac8\") " pod="openshift-machine-config-operator/machine-config-server-tjhpt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.828992 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-fxcmt" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.832508 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chvwb\" (UniqueName: \"kubernetes.io/projected/9d0d0159-396b-49a3-a9bd-3346a06f0556-kube-api-access-chvwb\") pod \"ingress-canary-mmkz2\" (UID: \"9d0d0159-396b-49a3-a9bd-3346a06f0556\") " pod="openshift-ingress-canary/ingress-canary-mmkz2" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.838446 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-mmkz2" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.840188 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tm7nq"] Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.849073 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537550-hbznb"] Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.854881 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw72z\" (UniqueName: \"kubernetes.io/projected/e929a2fa-f34f-4100-9d0b-45752ddba504-kube-api-access-sw72z\") pod \"dns-default-dpwrd\" (UID: \"e929a2fa-f34f-4100-9d0b-45752ddba504\") " pod="openshift-dns/dns-default-dpwrd" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.866504 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxk7d\" (UniqueName: \"kubernetes.io/projected/d84dec61-f4ef-4e0b-adb1-66694017a156-kube-api-access-hxk7d\") pod \"auto-csr-approver-29537556-wwqxk\" (UID: \"d84dec61-f4ef-4e0b-adb1-66694017a156\") " pod="openshift-infra/auto-csr-approver-29537556-wwqxk" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.868164 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wlspw" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.891827 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zlklt"] Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.897376 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t8fg\" (UniqueName: \"kubernetes.io/projected/85352047-6877-471a-8f68-76e28f6be644-kube-api-access-2t8fg\") pod \"service-ca-9c57cc56f-vzl28\" (UID: \"85352047-6877-471a-8f68-76e28f6be644\") " pod="openshift-service-ca/service-ca-9c57cc56f-vzl28" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.910660 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-468jw\" (UniqueName: \"kubernetes.io/projected/459eac3b-ce97-42fa-966d-47072347d2b8-kube-api-access-468jw\") pod \"service-ca-operator-777779d784-zmwn2\" (UID: \"459eac3b-ce97-42fa-966d-47072347d2b8\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-zmwn2" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.914706 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:18 crc kubenswrapper[5014]: E0228 04:37:18.916180 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:19.416055589 +0000 UTC m=+228.086181499 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.924147 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-h4z55" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.934078 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jplqc" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.943538 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wxczw" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.960794 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-wtnl5" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.984632 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rc7jm" Feb 28 04:37:18 crc kubenswrapper[5014]: I0228 04:37:18.985415 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q5llk"] Feb 28 04:37:19 crc kubenswrapper[5014]: E0228 04:37:19.017119 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-28 04:37:19.517101888 +0000 UTC m=+228.187227798 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.017162 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.020934 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zmwn2" Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.045735 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-vzl28" Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.070830 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537556-wwqxk" Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.095848 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-dpwrd" Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.108831 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-tjhpt" Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.118682 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:19 crc kubenswrapper[5014]: E0228 04:37:19.119047 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:19.619024862 +0000 UTC m=+228.289150772 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.219998 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:19 crc kubenswrapper[5014]: E0228 04:37:19.220548 5014 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:19.720529373 +0000 UTC m=+228.390655273 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.307905 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pnbpp"] Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.323059 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:19 crc kubenswrapper[5014]: E0228 04:37:19.323199 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:19.823172137 +0000 UTC m=+228.493298047 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.323341 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:19 crc kubenswrapper[5014]: E0228 04:37:19.323726 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:19.823716262 +0000 UTC m=+228.493842172 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.406876 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497" event={"ID":"9ab64f63-1297-4556-ae3e-51009cdf2384","Type":"ContainerStarted","Data":"cfa43486d1f93b9efb75ad1d3b8468b46f98302d103ae57871b7053903053dc8"} Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.425228 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:19 crc kubenswrapper[5014]: E0228 04:37:19.425909 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:19.925889502 +0000 UTC m=+228.596015412 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.446448 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rjr6k" event={"ID":"87307a00-6574-43d3-b6d8-5b5ee80ce95a","Type":"ContainerStarted","Data":"d0fb284c2dda3ce033095214bbd115071a953ccc1730cac12e025d56bf0c1b2f"} Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.468864 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-x5xfl"] Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.469174 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-cghpw" event={"ID":"9f80824d-7fc7-44e3-982c-2856a99523be","Type":"ContainerStarted","Data":"f220f26197d025da314b2a107fd1aa000d5279ef196d964c0ce116ab9deceeaf"} Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.469199 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-cghpw" event={"ID":"9f80824d-7fc7-44e3-982c-2856a99523be","Type":"ContainerStarted","Data":"d35f1d583973fd1463ba6f0dffef2bfa0f051bd21c0913bb7cdf1b672bcbc93b"} Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.470497 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-cghpw" Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.475451 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2rth" event={"ID":"992977b5-9456-46f3-9534-01f21a293ed1","Type":"ContainerStarted","Data":"18617dfae195b2be3cd9879083811d1829223e85ce04ed60714f38f60251eef0"} Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.477654 5014 patch_prober.go:28] interesting pod/downloads-7954f5f757-cghpw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.481636 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cghpw" podUID="9f80824d-7fc7-44e3-982c-2856a99523be" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.479832 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rxdxq"] Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.483554 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wc7xs" event={"ID":"b75737b1-5468-47d8-ab73-59b1d3a174a3","Type":"ContainerStarted","Data":"6fd9796f466ed4c0a62b6d373be7a8d5b5253acf13e0de7f410a7d3bc5aeb683"} Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.490583 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zlklt" event={"ID":"560e68a9-862a-4814-a55d-4ea3e9932ea3","Type":"ContainerStarted","Data":"f090a7ac22b119a8322b8b2b0a5b72742d7b6a3c84c855a7bdb0cd32b4f8402e"} Feb 28 04:37:19 crc kubenswrapper[5014]: W0228 04:37:19.491544 5014 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5915ea5d_0cf3_405e_9372_18cfcc5dc993.slice/crio-038663ba595d5435bcbd3cd5633b36d0d41d6d7cc70c81db6509e637bf8597f9 WatchSource:0}: Error finding container 038663ba595d5435bcbd3cd5633b36d0d41d6d7cc70c81db6509e637bf8597f9: Status 404 returned error can't find the container with id 038663ba595d5435bcbd3cd5633b36d0d41d6d7cc70c81db6509e637bf8597f9 Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.494816 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-n8xpb" event={"ID":"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b","Type":"ContainerStarted","Data":"7359ff212343f50eb0de16116ac02f48a69c02db1421cde9de4b0ce72b35f3e7"} Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.558581 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:19 crc kubenswrapper[5014]: E0228 04:37:19.559164 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:20.059139665 +0000 UTC m=+228.729265575 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.565331 5014 generic.go:334] "Generic (PLEG): container finished" podID="17f469fa-831e-4e38-8ace-55fc476a337c" containerID="e66bb02a1f382c59cec5669325f4b128168152a36d7aba692080393c34518669" exitCode=0 Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.565391 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w4sdb" event={"ID":"17f469fa-831e-4e38-8ace-55fc476a337c","Type":"ContainerDied","Data":"e66bb02a1f382c59cec5669325f4b128168152a36d7aba692080393c34518669"} Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.582684 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w4sdb" event={"ID":"17f469fa-831e-4e38-8ace-55fc476a337c","Type":"ContainerStarted","Data":"6966b2564c6a66faa33b95f70ad539e3aafc1b621ae08ca24b12e4080df8d9e9"} Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.582720 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qmrxt"] Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.587877 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537550-hbznb" event={"ID":"91c20ddd-76d6-4e47-a24e-ec090ff039de","Type":"ContainerStarted","Data":"35ac46e9a6ed6799fad217ed58b3bd60d5f9de7e04b95ff84a8d47cc5ab776c7"} Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 
04:37:19.588950 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-tk557"] Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.591130 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s68g5" event={"ID":"7509910c-9915-4f07-80a6-d0b1eccd9213","Type":"ContainerStarted","Data":"2f1292235342e7b39b17fa36c50d2ebb6fdda17ff1b57eef1f9d45fff8e18b30"} Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.591176 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s68g5" Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.591190 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s68g5" event={"ID":"7509910c-9915-4f07-80a6-d0b1eccd9213","Type":"ContainerStarted","Data":"555d12f455dc11808c46d23c756320d54312e9fc427669112697504bd4d5d425"} Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.593075 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-cslg4" event={"ID":"5560e581-fc17-4214-bd6d-2f2332633891","Type":"ContainerStarted","Data":"612e45269be323c86e4f6a8095b2d379fc4d17b50db97e805b814803d203f7a2"} Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.594409 5014 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-s68g5 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.594486 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s68g5" podUID="7509910c-9915-4f07-80a6-d0b1eccd9213" containerName="olm-operator" probeResult="failure" 
output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.594634 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q5llk" event={"ID":"8c0701a3-3ba4-42cc-b570-bb688909e07d","Type":"ContainerStarted","Data":"c6914fc88b9387aa77ba251eac612b29f303c67112f7a986035910755571427d"} Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.595383 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-z87qr" event={"ID":"8bf7d4c6-1fd5-4fa4-a7a3-bf5af08d7eba","Type":"ContainerStarted","Data":"8d3aedcb7ca6c9c18c08441e6c7348faecd93e18de6734889531b1ef18af158a"} Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.596427 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-kct58" event={"ID":"671a6723-b559-48d1-957e-a56ee7ef7a64","Type":"ContainerStarted","Data":"20d546a74229b605a6431dd99a9d7265ae77a763d9823c8d1960b99125ce69ca"} Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.597472 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" event={"ID":"56dc15d6-ebc6-459c-9847-c9f8c66dffe4","Type":"ContainerStarted","Data":"1cddd47f9a8e7604446601577fa2295d5fd2ac61d275ef7b1c4c914287234d62"} Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.601025 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mt6mh" event={"ID":"98477023-48bc-48a1-a641-dafcc6b08624","Type":"ContainerStarted","Data":"22f83d627f58cdab1d2cb77b6e8976c9a696517bf5c301171f7211f4a8117a91"} Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.601088 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mt6mh" event={"ID":"98477023-48bc-48a1-a641-dafcc6b08624","Type":"ContainerStarted","Data":"40f1713554ef5d1071ec4b6c09a1e01af3f13e880321e8956bb58307370194d3"} Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.612987 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kckfv" event={"ID":"e957ffda-f443-4f2b-9a8e-4e2fd41beaad","Type":"ContainerStarted","Data":"620e955b90158fea3a8e131b57a50ff6b0ca3b85e7a1d4c0702a0dc1cca19c74"} Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.613055 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kckfv" event={"ID":"e957ffda-f443-4f2b-9a8e-4e2fd41beaad","Type":"ContainerStarted","Data":"62274cb5191243cd9e499a02346d3f3622308fd51136f384218474e34063c2c7"} Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.621624 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tm7nq" event={"ID":"c3f5d06e-976d-42ce-9693-bc41c2ee9154","Type":"ContainerStarted","Data":"a1ae5fd019fe30d90ad2170d4851762c4693b0b8179cd57bf427091c4b9b1190"} Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.638293 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-488hv" event={"ID":"0060d8e2-8ffe-4a64-9109-57cb6f97ec0e","Type":"ContainerStarted","Data":"ce397050d19f8de7d9da226dd26b8ba2696d948fe2126ea5e6d9f4fa1fc35aad"} Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.686443 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:19 crc kubenswrapper[5014]: E0228 04:37:19.689335 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:20.189296369 +0000 UTC m=+228.859422289 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.704667 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bl64c"] Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.749081 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-66gzd"] Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.788603 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:19 crc kubenswrapper[5014]: E0228 04:37:19.790365 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-28 04:37:20.290352678 +0000 UTC m=+228.960478588 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.828097 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-rc7jm"] Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.860872 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5xc62"] Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.889911 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:19 crc kubenswrapper[5014]: E0228 04:37:19.890398 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:20.390377248 +0000 UTC m=+229.060503158 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.897111 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-fxcmt"] Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.920275 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-ddxr6"] Feb 28 04:37:19 crc kubenswrapper[5014]: W0228 04:37:19.987559 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb2c3c93_6469_4c1c_939e_426aaeabfce4.slice/crio-79c924e62c1f975ef3c910eeaf2ce186255eda1968788e963b8f1355be0dcf17 WatchSource:0}: Error finding container 79c924e62c1f975ef3c910eeaf2ce186255eda1968788e963b8f1355be0dcf17: Status 404 returned error can't find the container with id 79c924e62c1f975ef3c910eeaf2ce186255eda1968788e963b8f1355be0dcf17 Feb 28 04:37:19 crc kubenswrapper[5014]: I0228 04:37:19.992544 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:19 crc kubenswrapper[5014]: W0228 04:37:19.992769 5014 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2450d26_3188_41cd_bc8d_cda5368b7db2.slice/crio-d680ebc12be2cd567de554673126905848e3f07cd26211b7ebd4b4683a422afc WatchSource:0}: Error finding container d680ebc12be2cd567de554673126905848e3f07cd26211b7ebd4b4683a422afc: Status 404 returned error can't find the container with id d680ebc12be2cd567de554673126905848e3f07cd26211b7ebd4b4683a422afc Feb 28 04:37:19 crc kubenswrapper[5014]: E0228 04:37:19.993087 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:20.493064943 +0000 UTC m=+229.163191033 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.010130 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-dpwrd"] Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.030357 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-mmkz2"] Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.094387 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:20 crc kubenswrapper[5014]: E0228 
04:37:20.094666 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:20.594625076 +0000 UTC m=+229.264750996 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.094771 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:20 crc kubenswrapper[5014]: E0228 04:37:20.095234 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:20.595220992 +0000 UTC m=+229.265346902 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.195886 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:20 crc kubenswrapper[5014]: E0228 04:37:20.197325 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:20.696630331 +0000 UTC m=+229.366756241 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.208099 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wtnl5"] Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.208148 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wlspw"] Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.227435 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537556-wwqxk"] Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.241325 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jplqc"] Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.252261 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vzl28"] Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.275781 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wxczw"] Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.295459 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-zmwn2"] Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.296329 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-bpskb" podStartSLOduration=179.296309561 
podStartE2EDuration="2m59.296309561s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:20.292529924 +0000 UTC m=+228.962655834" watchObservedRunningTime="2026-02-28 04:37:20.296309561 +0000 UTC m=+228.966435461" Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.299036 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:20 crc kubenswrapper[5014]: E0228 04:37:20.300085 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:20.800066167 +0000 UTC m=+229.470192077 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:20 crc kubenswrapper[5014]: W0228 04:37:20.312591 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bf5e004_613f_44d5_8b27_04f1e555ed88.slice/crio-62d5fa2e928e171b70c20e0a26d63ecf4f6be78e2ed1a368a9b94c88e8f8f88a WatchSource:0}: Error finding container 62d5fa2e928e171b70c20e0a26d63ecf4f6be78e2ed1a368a9b94c88e8f8f88a: Status 404 returned error can't find the container with id 62d5fa2e928e171b70c20e0a26d63ecf4f6be78e2ed1a368a9b94c88e8f8f88a Feb 28 04:37:20 crc kubenswrapper[5014]: W0228 04:37:20.355013 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac663b86_4954_4552_a9bb_a0ea8eff89ef.slice/crio-8ff6b9d6233c81b9b8e148610f6fc5d74af51f8f19e1f1518858200badf4f71d WatchSource:0}: Error finding container 8ff6b9d6233c81b9b8e148610f6fc5d74af51f8f19e1f1518858200badf4f71d: Status 404 returned error can't find the container with id 8ff6b9d6233c81b9b8e148610f6fc5d74af51f8f19e1f1518858200badf4f71d Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.373144 5014 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.400010 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") 
pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:20 crc kubenswrapper[5014]: E0228 04:37:20.400284 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:20.90021101 +0000 UTC m=+229.570336920 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.400519 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:20 crc kubenswrapper[5014]: E0228 04:37:20.401078 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:20.901061604 +0000 UTC m=+229.571187514 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:20 crc kubenswrapper[5014]: W0228 04:37:20.410538 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb60b7614_e66f_4184_b1ff_10fb0ba1ed31.slice/crio-e52d8d9591447f9328694775ada7b2bbe1b9c8efac8fdff6a859c7eac51d0af8 WatchSource:0}: Error finding container e52d8d9591447f9328694775ada7b2bbe1b9c8efac8fdff6a859c7eac51d0af8: Status 404 returned error can't find the container with id e52d8d9591447f9328694775ada7b2bbe1b9c8efac8fdff6a859c7eac51d0af8 Feb 28 04:37:20 crc kubenswrapper[5014]: W0228 04:37:20.423760 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod459eac3b_ce97_42fa_966d_47072347d2b8.slice/crio-9d71210fdf5cbe02f6273eb0c6d1493cfe7ed79fb997a7c153f70c21bb5abcdc WatchSource:0}: Error finding container 9d71210fdf5cbe02f6273eb0c6d1493cfe7ed79fb997a7c153f70c21bb5abcdc: Status 404 returned error can't find the container with id 9d71210fdf5cbe02f6273eb0c6d1493cfe7ed79fb997a7c153f70c21bb5abcdc Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.513465 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:20 crc kubenswrapper[5014]: E0228 04:37:20.513663 5014 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:21.01362995 +0000 UTC m=+229.683755850 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.513875 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:20 crc kubenswrapper[5014]: E0228 04:37:20.514260 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:21.014242127 +0000 UTC m=+229.684368037 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.532971 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-kckfv" podStartSLOduration=179.532951528 podStartE2EDuration="2m59.532951528s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:20.53230145 +0000 UTC m=+229.202427350" watchObservedRunningTime="2026-02-28 04:37:20.532951528 +0000 UTC m=+229.203077438" Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.566539 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mt6mh" podStartSLOduration=179.566517111 podStartE2EDuration="2m59.566517111s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:20.564095253 +0000 UTC m=+229.234221163" watchObservedRunningTime="2026-02-28 04:37:20.566517111 +0000 UTC m=+229.236643021" Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.604099 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s68g5" podStartSLOduration=179.604074957 podStartE2EDuration="2m59.604074957s" podCreationTimestamp="2026-02-28 04:34:21 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:20.600916548 +0000 UTC m=+229.271042458" watchObservedRunningTime="2026-02-28 04:37:20.604074957 +0000 UTC m=+229.274200867" Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.615025 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:20 crc kubenswrapper[5014]: E0228 04:37:20.615413 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:21.115398929 +0000 UTC m=+229.785524839 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.647947 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-cghpw" podStartSLOduration=179.647914842 podStartE2EDuration="2m59.647914842s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:20.640837761 +0000 UTC m=+229.310963671" watchObservedRunningTime="2026-02-28 04:37:20.647914842 +0000 UTC m=+229.318040752" Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.651280 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fxcmt" event={"ID":"a3d454ff-aef1-427e-a572-c21562fb3659","Type":"ContainerStarted","Data":"d5f9c2117e3a60ba7fd3c756d7b2e5d97c60073dee4c40d29a5312f7bb20bc5c"} Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.652260 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jplqc" event={"ID":"ead1f1e3-5d6d-4701-8b6f-99cd842d23bc","Type":"ContainerStarted","Data":"a88efc743ac7c17a76c2377582e1f07ae36583d677cf3f5dd1ba54cec85afb69"} Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.654096 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-z87qr" 
event={"ID":"8bf7d4c6-1fd5-4fa4-a7a3-bf5af08d7eba","Type":"ContainerStarted","Data":"38633fcece39ace6bb5b2781c027c30d290cb15a05087c948839da5769db1d24"} Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.666284 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zlklt" event={"ID":"560e68a9-862a-4814-a55d-4ea3e9932ea3","Type":"ContainerStarted","Data":"f1a3c5233ffc012ebbf780db4a291c43fd04e4d96461285dd8962c11e357fe9b"} Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.668772 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x5xfl" event={"ID":"7c8bd670-9de4-422c-9ff3-12f776fbc47f","Type":"ContainerStarted","Data":"c222a6c1956d67bfa99b4607d0f912253edbed0ee87d3276f1a66e31822d9349"} Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.668793 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x5xfl" event={"ID":"7c8bd670-9de4-422c-9ff3-12f776fbc47f","Type":"ContainerStarted","Data":"da552f6744ca9ed863896bfc15ef9452b77456d68cccdf786796c6d706f06efd"} Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.674166 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-kct58" event={"ID":"671a6723-b559-48d1-957e-a56ee7ef7a64","Type":"ContainerStarted","Data":"d06b396b89a032ff964e9ab7fc10ea30ebfa9f7d8ff8ec5dbad665ec1476999e"} Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.678577 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-n8xpb" event={"ID":"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b","Type":"ContainerStarted","Data":"75a4c1bf46fae910a90bf0ac7440a9495f9a393b6dd20925567b7333f18b10cc"} Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.680451 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" event={"ID":"56dc15d6-ebc6-459c-9847-c9f8c66dffe4","Type":"ContainerStarted","Data":"403e7d3890786e07592e30b30e741dafceab7b1b05e069dc14623eb1bc63c372"} Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.681150 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.682316 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bl64c" event={"ID":"797b2165-10cf-4886-a106-7f1010672030","Type":"ContainerStarted","Data":"818a2ef10758e585a6e52ca1c37fe1ee86ad46b6d40accb532ccd576d45d3a32"} Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.683632 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pnbpp" event={"ID":"5915ea5d-0cf3-405e-9372-18cfcc5dc993","Type":"ContainerStarted","Data":"038663ba595d5435bcbd3cd5633b36d0d41d6d7cc70c81db6509e637bf8597f9"} Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.693235 5014 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-fkqnd container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.693519 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" podUID="56dc15d6-ebc6-459c-9847-c9f8c66dffe4" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.696530 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wc7xs" event={"ID":"b75737b1-5468-47d8-ab73-59b1d3a174a3","Type":"ContainerStarted","Data":"ae5ffd41e4fc5f065d2084ab89ee5c26beb712886312b2142ad38eb3ae00bd61"} Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.699012 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wtnl5" event={"ID":"8bf5e004-613f-44d5-8b27-04f1e555ed88","Type":"ContainerStarted","Data":"62d5fa2e928e171b70c20e0a26d63ecf4f6be78e2ed1a368a9b94c88e8f8f88a"} Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.725425 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zmwn2" event={"ID":"459eac3b-ce97-42fa-966d-47072347d2b8","Type":"ContainerStarted","Data":"9d71210fdf5cbe02f6273eb0c6d1493cfe7ed79fb997a7c153f70c21bb5abcdc"} Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.725566 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wlspw" event={"ID":"ac663b86-4954-4552-a9bb-a0ea8eff89ef","Type":"ContainerStarted","Data":"8ff6b9d6233c81b9b8e148610f6fc5d74af51f8f19e1f1518858200badf4f71d"} Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.725585 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rjr6k" event={"ID":"87307a00-6574-43d3-b6d8-5b5ee80ce95a","Type":"ContainerStarted","Data":"7beeaf3803739051879ce259126ec9db0f71465aed13b61e6046d140a004f09a"} Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.726274 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: 
\"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:20 crc kubenswrapper[5014]: E0228 04:37:20.728402 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:21.228379846 +0000 UTC m=+229.898505756 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.728724 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-zlklt" podStartSLOduration=179.728698905 podStartE2EDuration="2m59.728698905s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:20.726142432 +0000 UTC m=+229.396268342" watchObservedRunningTime="2026-02-28 04:37:20.728698905 +0000 UTC m=+229.398824815" Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.734410 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-488hv" event={"ID":"0060d8e2-8ffe-4a64-9109-57cb6f97ec0e","Type":"ContainerStarted","Data":"70088157d2babcf5441796d9ac22557b9218b672bd9bb497364c30217b1f6534"} Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.764760 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5xc62" event={"ID":"deecabfd-701d-4737-b267-61d42cf2c52d","Type":"ContainerStarted","Data":"b2d9a8440dd33b7b114191d452e1b4d83da933f6a5e9c2fbbecbaf32b88e8633"} Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.775073 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-n8xpb" podStartSLOduration=179.775042491 podStartE2EDuration="2m59.775042491s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:20.765303724 +0000 UTC m=+229.435429634" watchObservedRunningTime="2026-02-28 04:37:20.775042491 +0000 UTC m=+229.445168401" Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.777384 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537556-wwqxk" event={"ID":"d84dec61-f4ef-4e0b-adb1-66694017a156","Type":"ContainerStarted","Data":"cd8b5bfc02c146d03f03c977c5b5691b77d82835da0fe2746707b5f206faeff9"} Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.781262 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q5llk" event={"ID":"8c0701a3-3ba4-42cc-b570-bb688909e07d","Type":"ContainerStarted","Data":"575c229b5d81aac4bd8ba22384aa2b7c46d105712473bb0db327bf45db29d504"} Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.789603 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-cslg4" event={"ID":"5560e581-fc17-4214-bd6d-2f2332633891","Type":"ContainerStarted","Data":"8ac65b76a1c86352e7b7facd91ff25de3b0adcf3eaa9f25e3eb436c59c91c1f3"} Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.797683 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-server-tjhpt" event={"ID":"01f6c816-1e7c-41ee-90a2-38976e24bac8","Type":"ContainerStarted","Data":"32688a04888567c7ebdc58bdaf5d404baa0cc17597a7de23810706c73dc607a0"} Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.797718 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-tjhpt" event={"ID":"01f6c816-1e7c-41ee-90a2-38976e24bac8","Type":"ContainerStarted","Data":"216881fcb8af06f968d635086e999bf37d461c31b236bcf875af25c7bee5e3d0"} Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.801731 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tm7nq" event={"ID":"c3f5d06e-976d-42ce-9693-bc41c2ee9154","Type":"ContainerStarted","Data":"1980c3d1d6e141c3ba54a5321bca36e7c2dc40ee619db080bfc7e88929463ec7"} Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.809610 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" podStartSLOduration=179.809578401 podStartE2EDuration="2m59.809578401s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:20.803143008 +0000 UTC m=+229.473268928" watchObservedRunningTime="2026-02-28 04:37:20.809578401 +0000 UTC m=+229.479704311" Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.810900 5014 generic.go:334] "Generic (PLEG): container finished" podID="9ab64f63-1297-4556-ae3e-51009cdf2384" containerID="5f5cf37e73e9925498618d753b82a0ee753f12463e8fdf0fb6fe234e86440253" exitCode=0 Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.811019 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497" 
event={"ID":"9ab64f63-1297-4556-ae3e-51009cdf2384","Type":"ContainerDied","Data":"5f5cf37e73e9925498618d753b82a0ee753f12463e8fdf0fb6fe234e86440253"} Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.818727 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-tk557" event={"ID":"e0e433c6-1184-4c3b-993a-53dd1db80f8a","Type":"ContainerStarted","Data":"21e8b3d6646a5b26525e6f69806dce7e3567ba2635a5b7ca07a95291e9e0779b"} Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.827666 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-dpwrd" event={"ID":"e929a2fa-f34f-4100-9d0b-45752ddba504","Type":"ContainerStarted","Data":"014ef19ac27bec524ef71dd63aaa6326e4d06cf4e29c12e6a4721daff3a85b46"} Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.830165 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wxczw" event={"ID":"b60b7614-e66f-4184-b1ff-10fb0ba1ed31","Type":"ContainerStarted","Data":"e52d8d9591447f9328694775ada7b2bbe1b9c8efac8fdff6a859c7eac51d0af8"} Feb 28 04:37:20 crc kubenswrapper[5014]: E0228 04:37:20.831701 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:21.331677948 +0000 UTC m=+230.001803858 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.832220 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-66gzd" event={"ID":"db2c3c93-6469-4c1c-939e-426aaeabfce4","Type":"ContainerStarted","Data":"79c924e62c1f975ef3c910eeaf2ce186255eda1968788e963b8f1355be0dcf17"} Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.831051 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.837113 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.840874 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-mmkz2" event={"ID":"9d0d0159-396b-49a3-a9bd-3346a06f0556","Type":"ContainerStarted","Data":"6b03a62e1532a8c838c9a63669d494414e2109a24b60be843ac70a17605cc0b2"} Feb 28 04:37:20 crc kubenswrapper[5014]: 
I0228 04:37:20.843062 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-z87qr" podStartSLOduration=179.843049411 podStartE2EDuration="2m59.843049411s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:20.841981121 +0000 UTC m=+229.512107051" watchObservedRunningTime="2026-02-28 04:37:20.843049411 +0000 UTC m=+229.513175321" Feb 28 04:37:20 crc kubenswrapper[5014]: E0228 04:37:20.843426 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:21.343405141 +0000 UTC m=+230.013531051 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.851505 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qmrxt" event={"ID":"b9aa43b2-c924-4e9d-8c38-cf39ee922d3e","Type":"ContainerStarted","Data":"30424cb4daced262e91c198603e6243713c3d6382436bd76be43bcdc052f2f9b"} Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.859196 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ddxr6" 
event={"ID":"96429a28-52a4-4465-810a-1bdfa6dee2bf","Type":"ContainerStarted","Data":"5e5f7fbc7eef15647c4d7ed4ad862833f59138d1d3ed47c918a90889e523096d"} Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.888171 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2rth" event={"ID":"992977b5-9456-46f3-9534-01f21a293ed1","Type":"ContainerStarted","Data":"327691e44a31274a143d0af53987b489eb0e6fd54a61b12099b49c6d31068301"} Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.890472 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-vzl28" event={"ID":"85352047-6877-471a-8f68-76e28f6be644","Type":"ContainerStarted","Data":"7e14b83f0c827b7110c5cceede9f3de7fce2a1ab99d3f89b68424ed109de2072"} Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.894562 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537550-hbznb" event={"ID":"91c20ddd-76d6-4e47-a24e-ec090ff039de","Type":"ContainerStarted","Data":"f2d472137bc70dded2e55dee4388671fe35e0e9737e1a729e2dfdfd32b64eb66"} Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.896187 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-h4z55" event={"ID":"6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3","Type":"ContainerStarted","Data":"76f9b90f7afd28112a034908809e19d16615c6c37c1d25bb423f8481504e260c"} Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.896208 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-h4z55" event={"ID":"6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3","Type":"ContainerStarted","Data":"0ab266fdc3e878d8edcf8ad85b94f35b7371ec2d59cb9311447469d809e26ff7"} Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.897349 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rxdxq" event={"ID":"39e99a31-4c12-4e77-918c-a7229c6899e9","Type":"ContainerStarted","Data":"1b00d9488644d088e4c920b4375ddb38ebccdef2de15c94444051c0c7351c7ab"} Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.899575 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rc7jm" event={"ID":"c2450d26-3188-41cd-bc8d-cda5368b7db2","Type":"ContainerStarted","Data":"d680ebc12be2cd567de554673126905848e3f07cd26211b7ebd4b4683a422afc"} Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.899749 5014 patch_prober.go:28] interesting pod/downloads-7954f5f757-cghpw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.899788 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cghpw" podUID="9f80824d-7fc7-44e3-982c-2856a99523be" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.900059 5014 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-s68g5 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.900255 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s68g5" podUID="7509910c-9915-4f07-80a6-d0b1eccd9213" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 
10.217.0.11:8443: connect: connection refused" Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.900308 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-kct58" podStartSLOduration=179.900296916 podStartE2EDuration="2m59.900296916s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:20.882434359 +0000 UTC m=+229.552560269" watchObservedRunningTime="2026-02-28 04:37:20.900296916 +0000 UTC m=+229.570422826" Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.925493 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-h4z55" Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.928702 5014 patch_prober.go:28] interesting pod/router-default-5444994796-h4z55 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.928859 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h4z55" podUID="6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.929045 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-cslg4" podStartSLOduration=179.929009911 podStartE2EDuration="2m59.929009911s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-28 04:37:20.924674278 +0000 UTC m=+229.594800178" watchObservedRunningTime="2026-02-28 04:37:20.929009911 +0000 UTC m=+229.599135821" Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.940625 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:20 crc kubenswrapper[5014]: E0228 04:37:20.942393 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:21.442373751 +0000 UTC m=+230.112499651 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:20 crc kubenswrapper[5014]: I0228 04:37:20.961972 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-tjhpt" podStartSLOduration=5.961953857 podStartE2EDuration="5.961953857s" podCreationTimestamp="2026-02-28 04:37:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:20.96103418 +0000 UTC m=+229.631160090" watchObservedRunningTime="2026-02-28 04:37:20.961953857 +0000 UTC m=+229.632079767" Feb 
28 04:37:21 crc kubenswrapper[5014]: I0228 04:37:21.021130 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-wc7xs" podStartSLOduration=180.021108456 podStartE2EDuration="3m0.021108456s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:21.006496701 +0000 UTC m=+229.676622621" watchObservedRunningTime="2026-02-28 04:37:21.021108456 +0000 UTC m=+229.691234366" Feb 28 04:37:21 crc kubenswrapper[5014]: I0228 04:37:21.046160 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:21 crc kubenswrapper[5014]: E0228 04:37:21.054104 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:21.554083841 +0000 UTC m=+230.224209751 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:21 crc kubenswrapper[5014]: I0228 04:37:21.057149 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tm7nq" podStartSLOduration=180.057125088 podStartE2EDuration="3m0.057125088s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:21.050469759 +0000 UTC m=+229.720595669" watchObservedRunningTime="2026-02-28 04:37:21.057125088 +0000 UTC m=+229.727250998" Feb 28 04:37:21 crc kubenswrapper[5014]: I0228 04:37:21.141754 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-488hv" podStartSLOduration=180.14172225 podStartE2EDuration="3m0.14172225s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:21.11038121 +0000 UTC m=+229.780507130" watchObservedRunningTime="2026-02-28 04:37:21.14172225 +0000 UTC m=+229.811848170" Feb 28 04:37:21 crc kubenswrapper[5014]: I0228 04:37:21.148011 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:21 crc kubenswrapper[5014]: E0228 04:37:21.148422 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:21.648403419 +0000 UTC m=+230.318529329 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:21 crc kubenswrapper[5014]: I0228 04:37:21.183999 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rjr6k" podStartSLOduration=180.183957228 podStartE2EDuration="3m0.183957228s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:21.146017651 +0000 UTC m=+229.816143561" watchObservedRunningTime="2026-02-28 04:37:21.183957228 +0000 UTC m=+229.854083138" Feb 28 04:37:21 crc kubenswrapper[5014]: I0228 04:37:21.251416 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:21 crc kubenswrapper[5014]: E0228 04:37:21.253071 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:21.75304344 +0000 UTC m=+230.423169510 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:21 crc kubenswrapper[5014]: I0228 04:37:21.256333 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29537550-hbznb" podStartSLOduration=180.256310783 podStartE2EDuration="3m0.256310783s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:21.207120986 +0000 UTC m=+229.877246896" watchObservedRunningTime="2026-02-28 04:37:21.256310783 +0000 UTC m=+229.926436693" Feb 28 04:37:21 crc kubenswrapper[5014]: I0228 04:37:21.256546 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-g2rth" podStartSLOduration=180.256541489 podStartE2EDuration="3m0.256541489s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:21.253230115 +0000 UTC 
m=+229.923356035" watchObservedRunningTime="2026-02-28 04:37:21.256541489 +0000 UTC m=+229.926667399" Feb 28 04:37:21 crc kubenswrapper[5014]: I0228 04:37:21.296927 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-h4z55" podStartSLOduration=180.296900264 podStartE2EDuration="3m0.296900264s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:21.294073994 +0000 UTC m=+229.964199924" watchObservedRunningTime="2026-02-28 04:37:21.296900264 +0000 UTC m=+229.967026164" Feb 28 04:37:21 crc kubenswrapper[5014]: I0228 04:37:21.353992 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:21 crc kubenswrapper[5014]: E0228 04:37:21.357265 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:21.855330953 +0000 UTC m=+230.525456853 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:21 crc kubenswrapper[5014]: I0228 04:37:21.456109 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:21 crc kubenswrapper[5014]: E0228 04:37:21.456688 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:21.95667432 +0000 UTC m=+230.626800230 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:21 crc kubenswrapper[5014]: I0228 04:37:21.557796 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:21 crc kubenswrapper[5014]: E0228 04:37:21.558914 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:22.058885812 +0000 UTC m=+230.729011722 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:21 crc kubenswrapper[5014]: I0228 04:37:21.660945 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:21 crc kubenswrapper[5014]: E0228 04:37:21.661600 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:22.161576556 +0000 UTC m=+230.831702466 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:21 crc kubenswrapper[5014]: I0228 04:37:21.762080 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:21 crc kubenswrapper[5014]: E0228 04:37:21.762296 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:22.262256704 +0000 UTC m=+230.932382614 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:21 crc kubenswrapper[5014]: I0228 04:37:21.762571 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:21 crc kubenswrapper[5014]: E0228 04:37:21.762985 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:22.262970005 +0000 UTC m=+230.933095915 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:21 crc kubenswrapper[5014]: I0228 04:37:21.866342 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:21 crc kubenswrapper[5014]: E0228 04:37:21.866572 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:22.366521075 +0000 UTC m=+231.036646995 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:21 crc kubenswrapper[5014]: I0228 04:37:21.867060 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:21 crc kubenswrapper[5014]: E0228 04:37:21.867486 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:22.367468631 +0000 UTC m=+231.037594541 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:21 crc kubenswrapper[5014]: I0228 04:37:21.911662 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:21 crc kubenswrapper[5014]: I0228 04:37:21.911718 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:21 crc kubenswrapper[5014]: I0228 04:37:21.912946 5014 patch_prober.go:28] interesting pod/apiserver-76f77b778f-488hv container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Feb 28 04:37:21 crc kubenswrapper[5014]: I0228 04:37:21.913014 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-488hv" podUID="0060d8e2-8ffe-4a64-9109-57cb6f97ec0e" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.5:8443/livez\": dial tcp 10.217.0.5:8443: connect: connection refused" Feb 28 04:37:21 crc kubenswrapper[5014]: I0228 04:37:21.932397 5014 patch_prober.go:28] interesting pod/router-default-5444994796-h4z55 container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Feb 28 04:37:21 crc kubenswrapper[5014]: I0228 04:37:21.932479 5014 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-h4z55" podUID="6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Feb 28 04:37:21 crc kubenswrapper[5014]: I0228 04:37:21.970767 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:21 crc kubenswrapper[5014]: E0228 04:37:21.971265 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:22.471243337 +0000 UTC m=+231.141369247 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.004319 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wlspw" event={"ID":"ac663b86-4954-4552-a9bb-a0ea8eff89ef","Type":"ContainerStarted","Data":"053f2f962a29a8cb5c292a85cd9cc8ac6b66259fd7daeb21a9bb031f600f4969"} Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.012910 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rxdxq" event={"ID":"39e99a31-4c12-4e77-918c-a7229c6899e9","Type":"ContainerStarted","Data":"6cfaf15f5a84566f50fa16533ca1d4288bcc37fb99379749b71d7712eadb6886"} Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.024561 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wtnl5" event={"ID":"8bf5e004-613f-44d5-8b27-04f1e555ed88","Type":"ContainerStarted","Data":"e0030be789bea90e06cc33e0f54e40aa892f64a9cdcf8c160d477b21154a9e44"} Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.055052 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wxczw" event={"ID":"b60b7614-e66f-4184-b1ff-10fb0ba1ed31","Type":"ContainerStarted","Data":"36b6894dfff18f968ac331dbaf2d9dcd27119fcfaad1529df1d59395f320b824"} Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.055177 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/marketplace-operator-79b997595-wxczw" Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.057536 5014 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-wxczw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" start-of-body= Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.057608 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-wxczw" podUID="b60b7614-e66f-4184-b1ff-10fb0ba1ed31" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.073938 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:22 crc kubenswrapper[5014]: E0228 04:37:22.075362 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:22.575347992 +0000 UTC m=+231.245473902 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.078272 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-wlspw" podStartSLOduration=181.078250925 podStartE2EDuration="3m1.078250925s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:22.076129344 +0000 UTC m=+230.746255254" watchObservedRunningTime="2026-02-28 04:37:22.078250925 +0000 UTC m=+230.748376835" Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.085317 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pnbpp" event={"ID":"5915ea5d-0cf3-405e-9372-18cfcc5dc993","Type":"ContainerStarted","Data":"9787edc51c2f49bc425dc6b1a7e64c0cfe20d5d0791add851430091b296d6c45"} Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.085381 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pnbpp" Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.089099 5014 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-pnbpp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Feb 28 04:37:22 crc 
kubenswrapper[5014]: I0228 04:37:22.089201 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pnbpp" podUID="5915ea5d-0cf3-405e-9372-18cfcc5dc993" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.114070 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-dpwrd" event={"ID":"e929a2fa-f34f-4100-9d0b-45752ddba504","Type":"ContainerStarted","Data":"7f0950c5b5015610b6bbf49dfaa08239890a57ead25b84765fb2a24150aac978"} Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.151491 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-rxdxq" podStartSLOduration=181.151473063 podStartE2EDuration="3m1.151473063s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:22.149482237 +0000 UTC m=+230.819608147" watchObservedRunningTime="2026-02-28 04:37:22.151473063 +0000 UTC m=+230.821598973" Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.157195 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rc7jm" event={"ID":"c2450d26-3188-41cd-bc8d-cda5368b7db2","Type":"ContainerStarted","Data":"48937109b645eb821e422bfe3a3e205f55d6f945e31d76f23978ed06fbce7f73"} Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.157260 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rc7jm" event={"ID":"c2450d26-3188-41cd-bc8d-cda5368b7db2","Type":"ContainerStarted","Data":"f892ccdcc718f263701ca9fdfa412db7f8d7ab9600d45303a3acae1d2cba5c38"} Feb 28 04:37:22 crc 
kubenswrapper[5014]: I0228 04:37:22.175337 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:22 crc kubenswrapper[5014]: E0228 04:37:22.179080 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:22.679037216 +0000 UTC m=+231.349163296 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.203508 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5xc62" event={"ID":"deecabfd-701d-4737-b267-61d42cf2c52d","Type":"ContainerStarted","Data":"92eae11c636c46f861491a2874936db778d137f7397c0cdb1c8f3b143ddf844d"} Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.213635 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ddxr6" event={"ID":"96429a28-52a4-4465-810a-1bdfa6dee2bf","Type":"ContainerStarted","Data":"30cf0b9ef5f08a7fb452d678155540aaa7c5fa15442fd1ccc4676d3e565849e2"} Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.213691 
5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ddxr6" event={"ID":"96429a28-52a4-4465-810a-1bdfa6dee2bf","Type":"ContainerStarted","Data":"97eb589fb2313198df9cc9d0b03e794c78442ece6cb2df16365d480d2e30fdaf"} Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.240647 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-wxczw" podStartSLOduration=181.240618874 podStartE2EDuration="3m1.240618874s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:22.239644176 +0000 UTC m=+230.909770086" watchObservedRunningTime="2026-02-28 04:37:22.240618874 +0000 UTC m=+230.910744784" Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.244223 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q5llk" event={"ID":"8c0701a3-3ba4-42cc-b570-bb688909e07d","Type":"ContainerStarted","Data":"8c18c274e38f29d9adfe76fd239c734639cc246d7620422c8c93a743d9b48a81"} Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.244415 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q5llk" Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.260563 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bl64c" event={"ID":"797b2165-10cf-4886-a106-7f1010672030","Type":"ContainerStarted","Data":"dba19494527a57c18b2cba26233275af18bfd66b45a1e4cf3ab20e9732be14a4"} Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.260879 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-bl64c" Feb 28 04:37:22 
crc kubenswrapper[5014]: I0228 04:37:22.269363 5014 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-bl64c container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.269438 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-bl64c" podUID="797b2165-10cf-4886-a106-7f1010672030" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.277981 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:22 crc kubenswrapper[5014]: E0228 04:37:22.279311 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:22.779293732 +0000 UTC m=+231.449419862 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.302296 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-5xc62" podStartSLOduration=181.302274184 podStartE2EDuration="3m1.302274184s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:22.269153023 +0000 UTC m=+230.939278933" watchObservedRunningTime="2026-02-28 04:37:22.302274184 +0000 UTC m=+230.972400094" Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.316828 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-66gzd" event={"ID":"db2c3c93-6469-4c1c-939e-426aaeabfce4","Type":"ContainerStarted","Data":"d4e30129598ea31c379cb9f471a273a1889315b2d8997ad227b6087c989ab9d7"} Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.317140 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-66gzd" Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.323493 5014 patch_prober.go:28] interesting pod/console-operator-58897d9998-66gzd container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Feb 28 04:37:22 crc 
kubenswrapper[5014]: I0228 04:37:22.323568 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-66gzd" podUID="db2c3c93-6469-4c1c-939e-426aaeabfce4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.327141 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x5xfl" event={"ID":"7c8bd670-9de4-422c-9ff3-12f776fbc47f","Type":"ContainerStarted","Data":"f5cde2487340de4f6f27bef6194043231e8b8ec51ad152e803f9585e4e7efce8"} Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.342749 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-ddxr6" podStartSLOduration=181.342722082 podStartE2EDuration="3m1.342722082s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:22.301007898 +0000 UTC m=+230.971133798" watchObservedRunningTime="2026-02-28 04:37:22.342722082 +0000 UTC m=+231.012847992" Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.342910 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pnbpp" podStartSLOduration=180.342903018 podStartE2EDuration="3m0.342903018s" podCreationTimestamp="2026-02-28 04:34:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:22.341299652 +0000 UTC m=+231.011425562" watchObservedRunningTime="2026-02-28 04:37:22.342903018 +0000 UTC m=+231.013028938" Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.372039 5014 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-rc7jm" podStartSLOduration=181.372021374 podStartE2EDuration="3m1.372021374s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:22.369505632 +0000 UTC m=+231.039631552" watchObservedRunningTime="2026-02-28 04:37:22.372021374 +0000 UTC m=+231.042147284" Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.376860 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-vzl28" event={"ID":"85352047-6877-471a-8f68-76e28f6be644","Type":"ContainerStarted","Data":"894302431bbd1027f45eb16576d8809d6de16bdcaa38f30df7f6728d0377f263"} Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.378528 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:22 crc kubenswrapper[5014]: E0228 04:37:22.380528 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:22.880500095 +0000 UTC m=+231.550626165 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.403153 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q5llk" podStartSLOduration=181.403126967 podStartE2EDuration="3m1.403126967s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:22.402405647 +0000 UTC m=+231.072531567" watchObservedRunningTime="2026-02-28 04:37:22.403126967 +0000 UTC m=+231.073252877" Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.424093 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w4sdb" event={"ID":"17f469fa-831e-4e38-8ace-55fc476a337c","Type":"ContainerStarted","Data":"7500e657dd2ea69f9241426ef06100c4976a559752cfb1100e6d69723b9016fd"} Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.424199 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w4sdb" Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.435556 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-66gzd" podStartSLOduration=181.435518567 podStartE2EDuration="3m1.435518567s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:22.433301594 +0000 UTC m=+231.103427514" watchObservedRunningTime="2026-02-28 04:37:22.435518567 +0000 UTC m=+231.105644477" Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.440953 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-mmkz2" event={"ID":"9d0d0159-396b-49a3-a9bd-3346a06f0556","Type":"ContainerStarted","Data":"34cc2423f5a59657593f9afb385d1f855d080646d8a33677ae94f2a04334d6df"} Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.461346 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zmwn2" event={"ID":"459eac3b-ce97-42fa-966d-47072347d2b8","Type":"ContainerStarted","Data":"646649d467e084e660191c0c63eab7e43567cabf54f67d9df7605b971c423ed9"} Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.463530 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x5xfl" podStartSLOduration=181.463515611 podStartE2EDuration="3m1.463515611s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:22.462461601 +0000 UTC m=+231.132587511" watchObservedRunningTime="2026-02-28 04:37:22.463515611 +0000 UTC m=+231.133641521" Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.484569 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497" event={"ID":"9ab64f63-1297-4556-ae3e-51009cdf2384","Type":"ContainerStarted","Data":"0b5932e651760c03d46f8a2fda317fedc6d9eb1cdd3df8e72303da36a42def56"} Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.486667 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:22 crc kubenswrapper[5014]: E0228 04:37:22.487154 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:22.987136742 +0000 UTC m=+231.657262652 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.520600 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qmrxt" event={"ID":"b9aa43b2-c924-4e9d-8c38-cf39ee922d3e","Type":"ContainerStarted","Data":"17e57e3bc82aa20ea8544f04571d937a23451ef206f456824d959a2c83a171a0"} Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.524031 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qmrxt" Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.537814 5014 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-qmrxt container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.537887 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qmrxt" podUID="b9aa43b2-c924-4e9d-8c38-cf39ee922d3e" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.538898 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-bl64c" podStartSLOduration=181.538882871 podStartE2EDuration="3m1.538882871s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:22.521422955 +0000 UTC m=+231.191548865" watchObservedRunningTime="2026-02-28 04:37:22.538882871 +0000 UTC m=+231.209008781" Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.545910 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-tk557" event={"ID":"e0e433c6-1184-4c3b-993a-53dd1db80f8a","Type":"ContainerStarted","Data":"b84f512405d8d3122230d1dc0e2057f2e1a0f76873985960520683f15ffb6d69"} Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.568668 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jplqc" event={"ID":"ead1f1e3-5d6d-4701-8b6f-99cd842d23bc","Type":"ContainerStarted","Data":"03dcd79c1fd1fabf98c9a578a68e8c1fc08c802499b35687f2b06efb9bfdc924"} Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.570074 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jplqc" Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.578108 5014 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-jplqc container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.578197 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jplqc" podUID="ead1f1e3-5d6d-4701-8b6f-99cd842d23bc" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.578427 5014 patch_prober.go:28] interesting pod/downloads-7954f5f757-cghpw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.578483 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cghpw" podUID="9f80824d-7fc7-44e3-982c-2856a99523be" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.588172 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:22 crc kubenswrapper[5014]: E0228 
04:37:22.589774 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:23.089737164 +0000 UTC m=+231.759863074 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.621268 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w4sdb" podStartSLOduration=181.621243418 podStartE2EDuration="3m1.621243418s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:22.620280241 +0000 UTC m=+231.290406141" watchObservedRunningTime="2026-02-28 04:37:22.621243418 +0000 UTC m=+231.291369328" Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.621767 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-vzl28" podStartSLOduration=180.621762773 podStartE2EDuration="3m0.621762773s" podCreationTimestamp="2026-02-28 04:34:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:22.579987807 +0000 UTC m=+231.250113717" watchObservedRunningTime="2026-02-28 04:37:22.621762773 +0000 UTC m=+231.291888683" Feb 28 04:37:22 
crc kubenswrapper[5014]: I0228 04:37:22.689312 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-mmkz2" podStartSLOduration=7.68928108 podStartE2EDuration="7.68928108s" podCreationTimestamp="2026-02-28 04:37:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:22.688075686 +0000 UTC m=+231.358201596" watchObservedRunningTime="2026-02-28 04:37:22.68928108 +0000 UTC m=+231.359406990" Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.690693 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.691240 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497" podStartSLOduration=181.691217015 podStartE2EDuration="3m1.691217015s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:22.665392262 +0000 UTC m=+231.335518172" watchObservedRunningTime="2026-02-28 04:37:22.691217015 +0000 UTC m=+231.361342925" Feb 28 04:37:22 crc kubenswrapper[5014]: E0228 04:37:22.699359 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:23.199337325 +0000 UTC m=+231.869463235 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.720125 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-zmwn2" podStartSLOduration=180.720097314 podStartE2EDuration="3m0.720097314s" podCreationTimestamp="2026-02-28 04:34:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:22.718658634 +0000 UTC m=+231.388784534" watchObservedRunningTime="2026-02-28 04:37:22.720097314 +0000 UTC m=+231.390223224" Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.793463 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:22 crc kubenswrapper[5014]: E0228 04:37:22.793940 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:23.293923261 +0000 UTC m=+231.964049171 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.806148 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qmrxt" podStartSLOduration=180.806106716 podStartE2EDuration="3m0.806106716s" podCreationTimestamp="2026-02-28 04:34:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:22.776573227 +0000 UTC m=+231.446699137" watchObservedRunningTime="2026-02-28 04:37:22.806106716 +0000 UTC m=+231.476232616" Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.829598 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-tk557" podStartSLOduration=181.829569282 podStartE2EDuration="3m1.829569282s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:22.826798133 +0000 UTC m=+231.496924043" watchObservedRunningTime="2026-02-28 04:37:22.829569282 +0000 UTC m=+231.499695192" Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.867908 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497" Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.868119 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497" Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.879154 5014 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-t8497 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.879253 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497" podUID="9ab64f63-1297-4556-ae3e-51009cdf2384" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.8:8443/livez\": dial tcp 10.217.0.8:8443: connect: connection refused" Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.895604 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:22 crc kubenswrapper[5014]: E0228 04:37:22.896056 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:23.396042799 +0000 UTC m=+232.066168699 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.905191 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jplqc" podStartSLOduration=181.905168418 podStartE2EDuration="3m1.905168418s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:22.904554351 +0000 UTC m=+231.574680261" watchObservedRunningTime="2026-02-28 04:37:22.905168418 +0000 UTC m=+231.575294328" Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.935333 5014 patch_prober.go:28] interesting pod/router-default-5444994796-h4z55 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 04:37:22 crc kubenswrapper[5014]: [-]has-synced failed: reason withheld Feb 28 04:37:22 crc kubenswrapper[5014]: [+]process-running ok Feb 28 04:37:22 crc kubenswrapper[5014]: healthz check failed Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.935402 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h4z55" podUID="6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.998011 5014 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:22 crc kubenswrapper[5014]: E0228 04:37:22.998304 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:23.498252921 +0000 UTC m=+232.168378841 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:22 crc kubenswrapper[5014]: I0228 04:37:22.998696 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:22 crc kubenswrapper[5014]: E0228 04:37:22.999125 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:23.499110335 +0000 UTC m=+232.169236245 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:23 crc kubenswrapper[5014]: I0228 04:37:23.099962 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:23 crc kubenswrapper[5014]: E0228 04:37:23.100189 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:23.600156213 +0000 UTC m=+232.270282113 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:23 crc kubenswrapper[5014]: I0228 04:37:23.100258 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:23 crc kubenswrapper[5014]: E0228 04:37:23.100681 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:23.600659898 +0000 UTC m=+232.270785958 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:23 crc kubenswrapper[5014]: I0228 04:37:23.202164 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:23 crc kubenswrapper[5014]: E0228 04:37:23.202675 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:23.702654142 +0000 UTC m=+232.372780052 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:23 crc kubenswrapper[5014]: I0228 04:37:23.254958 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:37:23 crc kubenswrapper[5014]: I0228 04:37:23.303880 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:23 crc kubenswrapper[5014]: E0228 04:37:23.304384 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:23.8043656 +0000 UTC m=+232.474491510 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:23 crc kubenswrapper[5014]: I0228 04:37:23.410621 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:23 crc kubenswrapper[5014]: E0228 04:37:23.411076 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:23.911020648 +0000 UTC m=+232.581146558 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:23 crc kubenswrapper[5014]: I0228 04:37:23.411385 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:23 crc kubenswrapper[5014]: E0228 04:37:23.412002 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:23.911977215 +0000 UTC m=+232.582103135 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:23 crc kubenswrapper[5014]: I0228 04:37:23.465484 5014 ???:1] "http: TLS handshake error from 192.168.126.11:47230: no serving certificate available for the kubelet" Feb 28 04:37:23 crc kubenswrapper[5014]: I0228 04:37:23.512604 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:23 crc kubenswrapper[5014]: E0228 04:37:23.512843 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:24.012794237 +0000 UTC m=+232.682920147 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:23 crc kubenswrapper[5014]: I0228 04:37:23.513101 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:23 crc kubenswrapper[5014]: E0228 04:37:23.513476 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:24.013461605 +0000 UTC m=+232.683587515 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:23 crc kubenswrapper[5014]: I0228 04:37:23.563428 5014 ???:1] "http: TLS handshake error from 192.168.126.11:47244: no serving certificate available for the kubelet" Feb 28 04:37:23 crc kubenswrapper[5014]: I0228 04:37:23.599692 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wtnl5" event={"ID":"8bf5e004-613f-44d5-8b27-04f1e555ed88","Type":"ContainerStarted","Data":"2fa3db5e265fbbf4784dffd3a51bbd06f85a734989e9f55f700b25752a33f5b5"} Feb 28 04:37:23 crc kubenswrapper[5014]: I0228 04:37:23.609467 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-dpwrd" event={"ID":"e929a2fa-f34f-4100-9d0b-45752ddba504","Type":"ContainerStarted","Data":"009b970328b4b6ba6d5548fb497056681ace998cc1e464e01bcf0849a2ab8af6"} Feb 28 04:37:23 crc kubenswrapper[5014]: I0228 04:37:23.610107 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-dpwrd" Feb 28 04:37:23 crc kubenswrapper[5014]: I0228 04:37:23.615601 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fxcmt" event={"ID":"a3d454ff-aef1-427e-a572-c21562fb3659","Type":"ContainerStarted","Data":"b0e1b403f758bd1b76d4b44298c681e3232faad0cb86670ddd21ca546b988d80"} Feb 28 04:37:23 crc kubenswrapper[5014]: I0228 04:37:23.617916 5014 patch_prober.go:28] interesting pod/console-operator-58897d9998-66gzd container/console-operator namespace/openshift-console-operator: Readiness probe 
status=failure output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Feb 28 04:37:23 crc kubenswrapper[5014]: I0228 04:37:23.617993 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-66gzd" podUID="db2c3c93-6469-4c1c-939e-426aaeabfce4" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/readyz\": dial tcp 10.217.0.25:8443: connect: connection refused" Feb 28 04:37:23 crc kubenswrapper[5014]: I0228 04:37:23.618330 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:23 crc kubenswrapper[5014]: I0228 04:37:23.618769 5014 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-bl64c container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Feb 28 04:37:23 crc kubenswrapper[5014]: I0228 04:37:23.618798 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-bl64c" podUID="797b2165-10cf-4886-a106-7f1010672030" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Feb 28 04:37:23 crc kubenswrapper[5014]: I0228 04:37:23.619030 5014 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-wxczw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: 
connect: connection refused" start-of-body= Feb 28 04:37:23 crc kubenswrapper[5014]: E0228 04:37:23.619038 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:24.119018702 +0000 UTC m=+232.789144612 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:23 crc kubenswrapper[5014]: I0228 04:37:23.619051 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-wxczw" podUID="b60b7614-e66f-4184-b1ff-10fb0ba1ed31" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.41:8080/healthz\": dial tcp 10.217.0.41:8080: connect: connection refused" Feb 28 04:37:23 crc kubenswrapper[5014]: I0228 04:37:23.631429 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-wtnl5" podStartSLOduration=182.631402394 podStartE2EDuration="3m2.631402394s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:23.627850823 +0000 UTC m=+232.297976723" watchObservedRunningTime="2026-02-28 04:37:23.631402394 +0000 UTC m=+232.301528304" Feb 28 04:37:23 crc kubenswrapper[5014]: I0228 04:37:23.672184 5014 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-dns/dns-default-dpwrd" podStartSLOduration=8.672157551 podStartE2EDuration="8.672157551s" podCreationTimestamp="2026-02-28 04:37:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:23.668580339 +0000 UTC m=+232.338706249" watchObservedRunningTime="2026-02-28 04:37:23.672157551 +0000 UTC m=+232.342283461" Feb 28 04:37:23 crc kubenswrapper[5014]: I0228 04:37:23.698795 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-jplqc" Feb 28 04:37:23 crc kubenswrapper[5014]: I0228 04:37:23.723033 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:23 crc kubenswrapper[5014]: E0228 04:37:23.724629 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:24.22460606 +0000 UTC m=+232.894732010 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:23 crc kubenswrapper[5014]: I0228 04:37:23.786381 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qmrxt" Feb 28 04:37:23 crc kubenswrapper[5014]: I0228 04:37:23.792391 5014 ???:1] "http: TLS handshake error from 192.168.126.11:47250: no serving certificate available for the kubelet" Feb 28 04:37:23 crc kubenswrapper[5014]: I0228 04:37:23.825252 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:23 crc kubenswrapper[5014]: E0228 04:37:23.825676 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:24.325651268 +0000 UTC m=+232.995777178 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:23 crc kubenswrapper[5014]: I0228 04:37:23.926742 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:23 crc kubenswrapper[5014]: E0228 04:37:23.927269 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:24.427246822 +0000 UTC m=+233.097372732 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:23 crc kubenswrapper[5014]: I0228 04:37:23.930560 5014 patch_prober.go:28] interesting pod/router-default-5444994796-h4z55 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 04:37:23 crc kubenswrapper[5014]: [-]has-synced failed: reason withheld Feb 28 04:37:23 crc kubenswrapper[5014]: [+]process-running ok Feb 28 04:37:23 crc kubenswrapper[5014]: healthz check failed Feb 28 04:37:23 crc kubenswrapper[5014]: I0228 04:37:23.930627 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h4z55" podUID="6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 04:37:23 crc kubenswrapper[5014]: I0228 04:37:23.977583 5014 ???:1] "http: TLS handshake error from 192.168.126.11:47256: no serving certificate available for the kubelet" Feb 28 04:37:24 crc kubenswrapper[5014]: I0228 04:37:24.028165 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:24 crc kubenswrapper[5014]: E0228 04:37:24.028385 5014 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:24.528352492 +0000 UTC m=+233.198478402 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:24 crc kubenswrapper[5014]: I0228 04:37:24.029061 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:24 crc kubenswrapper[5014]: E0228 04:37:24.029603 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:24.529583917 +0000 UTC m=+233.199709827 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:24 crc kubenswrapper[5014]: I0228 04:37:24.129997 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:24 crc kubenswrapper[5014]: E0228 04:37:24.130566 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:24.630549543 +0000 UTC m=+233.300675453 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:24 crc kubenswrapper[5014]: I0228 04:37:24.146952 5014 ???:1] "http: TLS handshake error from 192.168.126.11:47270: no serving certificate available for the kubelet" Feb 28 04:37:24 crc kubenswrapper[5014]: I0228 04:37:24.232576 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:24 crc kubenswrapper[5014]: E0228 04:37:24.233104 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:24.733079573 +0000 UTC m=+233.403205553 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:24 crc kubenswrapper[5014]: I0228 04:37:24.307251 5014 ???:1] "http: TLS handshake error from 192.168.126.11:47272: no serving certificate available for the kubelet" Feb 28 04:37:24 crc kubenswrapper[5014]: I0228 04:37:24.335842 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:24 crc kubenswrapper[5014]: E0228 04:37:24.336347 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:24.836319564 +0000 UTC m=+233.506445474 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:24 crc kubenswrapper[5014]: I0228 04:37:24.399725 5014 ???:1] "http: TLS handshake error from 192.168.126.11:47284: no serving certificate available for the kubelet" Feb 28 04:37:24 crc kubenswrapper[5014]: I0228 04:37:24.437738 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:24 crc kubenswrapper[5014]: E0228 04:37:24.438248 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:24.938224317 +0000 UTC m=+233.608350227 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:24 crc kubenswrapper[5014]: I0228 04:37:24.538836 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:24 crc kubenswrapper[5014]: E0228 04:37:24.539082 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:25.039050339 +0000 UTC m=+233.709176249 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:24 crc kubenswrapper[5014]: I0228 04:37:24.539155 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:24 crc kubenswrapper[5014]: E0228 04:37:24.539711 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:25.039685927 +0000 UTC m=+233.709811837 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:24 crc kubenswrapper[5014]: I0228 04:37:24.616835 5014 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-w4sdb container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 28 04:37:24 crc kubenswrapper[5014]: I0228 04:37:24.616924 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w4sdb" podUID="17f469fa-831e-4e38-8ace-55fc476a337c" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 28 04:37:24 crc kubenswrapper[5014]: I0228 04:37:24.616846 5014 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-pnbpp container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 28 04:37:24 crc kubenswrapper[5014]: I0228 04:37:24.617414 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pnbpp" podUID="5915ea5d-0cf3-405e-9372-18cfcc5dc993" 
containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 28 04:37:24 crc kubenswrapper[5014]: I0228 04:37:24.628607 5014 ???:1] "http: TLS handshake error from 192.168.126.11:47298: no serving certificate available for the kubelet" Feb 28 04:37:24 crc kubenswrapper[5014]: I0228 04:37:24.637314 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-w4sdb" Feb 28 04:37:24 crc kubenswrapper[5014]: I0228 04:37:24.644908 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:24 crc kubenswrapper[5014]: E0228 04:37:24.645340 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:25.145322245 +0000 UTC m=+233.815448155 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:24 crc kubenswrapper[5014]: I0228 04:37:24.707640 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-bl64c" Feb 28 04:37:24 crc kubenswrapper[5014]: I0228 04:37:24.750357 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:24 crc kubenswrapper[5014]: E0228 04:37:24.753287 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:25.25326679 +0000 UTC m=+233.923392700 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:24 crc kubenswrapper[5014]: I0228 04:37:24.788935 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-66gzd" Feb 28 04:37:24 crc kubenswrapper[5014]: I0228 04:37:24.850951 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:24 crc kubenswrapper[5014]: E0228 04:37:24.851273 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:25.351182039 +0000 UTC m=+234.021307949 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:24 crc kubenswrapper[5014]: I0228 04:37:24.851680 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:24 crc kubenswrapper[5014]: E0228 04:37:24.852088 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:25.352070924 +0000 UTC m=+234.022196834 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:24 crc kubenswrapper[5014]: I0228 04:37:24.930050 5014 patch_prober.go:28] interesting pod/router-default-5444994796-h4z55 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 04:37:24 crc kubenswrapper[5014]: [-]has-synced failed: reason withheld Feb 28 04:37:24 crc kubenswrapper[5014]: [+]process-running ok Feb 28 04:37:24 crc kubenswrapper[5014]: healthz check failed Feb 28 04:37:24 crc kubenswrapper[5014]: I0228 04:37:24.930138 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h4z55" podUID="6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 04:37:24 crc kubenswrapper[5014]: I0228 04:37:24.953348 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:24 crc kubenswrapper[5014]: E0228 04:37:24.953790 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-28 04:37:25.453766742 +0000 UTC m=+234.123892652 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.055214 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:25 crc kubenswrapper[5014]: E0228 04:37:25.055610 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:25.555597062 +0000 UTC m=+234.225722972 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.149685 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9cznf"] Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.150785 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9cznf" Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.156922 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:25 crc kubenswrapper[5014]: E0228 04:37:25.157426 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:25.657390081 +0000 UTC m=+234.327515991 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.159103 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.170846 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9cznf"] Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.258724 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52079806-fc0c-4852-8150-0123d376c1b2-catalog-content\") pod \"community-operators-9cznf\" (UID: \"52079806-fc0c-4852-8150-0123d376c1b2\") " pod="openshift-marketplace/community-operators-9cznf" Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.258821 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkdvt\" (UniqueName: \"kubernetes.io/projected/52079806-fc0c-4852-8150-0123d376c1b2-kube-api-access-qkdvt\") pod \"community-operators-9cznf\" (UID: \"52079806-fc0c-4852-8150-0123d376c1b2\") " pod="openshift-marketplace/community-operators-9cznf" Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.259003 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52079806-fc0c-4852-8150-0123d376c1b2-utilities\") pod \"community-operators-9cznf\" (UID: 
\"52079806-fc0c-4852-8150-0123d376c1b2\") " pod="openshift-marketplace/community-operators-9cznf" Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.259155 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:25 crc kubenswrapper[5014]: E0228 04:37:25.259626 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:25.759611594 +0000 UTC m=+234.429737504 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.360526 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sqfvs"] Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.361414 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.361650 
5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkdvt\" (UniqueName: \"kubernetes.io/projected/52079806-fc0c-4852-8150-0123d376c1b2-kube-api-access-qkdvt\") pod \"community-operators-9cznf\" (UID: \"52079806-fc0c-4852-8150-0123d376c1b2\") " pod="openshift-marketplace/community-operators-9cznf" Feb 28 04:37:25 crc kubenswrapper[5014]: E0228 04:37:25.361706 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:25.86166882 +0000 UTC m=+234.531794730 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.361837 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52079806-fc0c-4852-8150-0123d376c1b2-utilities\") pod \"community-operators-9cznf\" (UID: \"52079806-fc0c-4852-8150-0123d376c1b2\") " pod="openshift-marketplace/community-operators-9cznf" Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.361990 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.362218 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52079806-fc0c-4852-8150-0123d376c1b2-catalog-content\") pod \"community-operators-9cznf\" (UID: \"52079806-fc0c-4852-8150-0123d376c1b2\") " pod="openshift-marketplace/community-operators-9cznf" Feb 28 04:37:25 crc kubenswrapper[5014]: E0228 04:37:25.362390 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:25.86237584 +0000 UTC m=+234.532501750 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.362430 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52079806-fc0c-4852-8150-0123d376c1b2-utilities\") pod \"community-operators-9cznf\" (UID: \"52079806-fc0c-4852-8150-0123d376c1b2\") " pod="openshift-marketplace/community-operators-9cznf" Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.362744 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52079806-fc0c-4852-8150-0123d376c1b2-catalog-content\") pod \"community-operators-9cznf\" (UID: \"52079806-fc0c-4852-8150-0123d376c1b2\") " 
pod="openshift-marketplace/community-operators-9cznf" Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.363330 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sqfvs" Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.366316 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.374619 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sqfvs"] Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.387678 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkdvt\" (UniqueName: \"kubernetes.io/projected/52079806-fc0c-4852-8150-0123d376c1b2-kube-api-access-qkdvt\") pod \"community-operators-9cznf\" (UID: \"52079806-fc0c-4852-8150-0123d376c1b2\") " pod="openshift-marketplace/community-operators-9cznf" Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.408253 5014 ???:1] "http: TLS handshake error from 192.168.126.11:47306: no serving certificate available for the kubelet" Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.464619 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.464794 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50cf3400-fb73-4038-b616-2d3559aaf784-utilities\") pod \"certified-operators-sqfvs\" (UID: \"50cf3400-fb73-4038-b616-2d3559aaf784\") " pod="openshift-marketplace/certified-operators-sqfvs" Feb 28 
04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.464912 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50cf3400-fb73-4038-b616-2d3559aaf784-catalog-content\") pod \"certified-operators-sqfvs\" (UID: \"50cf3400-fb73-4038-b616-2d3559aaf784\") " pod="openshift-marketplace/certified-operators-sqfvs" Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.464934 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmkg5\" (UniqueName: \"kubernetes.io/projected/50cf3400-fb73-4038-b616-2d3559aaf784-kube-api-access-vmkg5\") pod \"certified-operators-sqfvs\" (UID: \"50cf3400-fb73-4038-b616-2d3559aaf784\") " pod="openshift-marketplace/certified-operators-sqfvs" Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.465243 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9cznf" Feb 28 04:37:25 crc kubenswrapper[5014]: E0228 04:37:25.466292 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:25.966255329 +0000 UTC m=+234.636381409 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.503246 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bl64c"] Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.566597 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50cf3400-fb73-4038-b616-2d3559aaf784-catalog-content\") pod \"certified-operators-sqfvs\" (UID: \"50cf3400-fb73-4038-b616-2d3559aaf784\") " pod="openshift-marketplace/certified-operators-sqfvs" Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.566650 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmkg5\" (UniqueName: \"kubernetes.io/projected/50cf3400-fb73-4038-b616-2d3559aaf784-kube-api-access-vmkg5\") pod \"certified-operators-sqfvs\" (UID: \"50cf3400-fb73-4038-b616-2d3559aaf784\") " pod="openshift-marketplace/certified-operators-sqfvs" Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.566708 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.566741 5014 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50cf3400-fb73-4038-b616-2d3559aaf784-utilities\") pod \"certified-operators-sqfvs\" (UID: \"50cf3400-fb73-4038-b616-2d3559aaf784\") " pod="openshift-marketplace/certified-operators-sqfvs" Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.567266 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50cf3400-fb73-4038-b616-2d3559aaf784-utilities\") pod \"certified-operators-sqfvs\" (UID: \"50cf3400-fb73-4038-b616-2d3559aaf784\") " pod="openshift-marketplace/certified-operators-sqfvs" Feb 28 04:37:25 crc kubenswrapper[5014]: E0228 04:37:25.567569 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:26.067554985 +0000 UTC m=+234.737680895 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.567626 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50cf3400-fb73-4038-b616-2d3559aaf784-catalog-content\") pod \"certified-operators-sqfvs\" (UID: \"50cf3400-fb73-4038-b616-2d3559aaf784\") " pod="openshift-marketplace/certified-operators-sqfvs" Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.613778 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qmrxt"] Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.618638 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmkg5\" (UniqueName: \"kubernetes.io/projected/50cf3400-fb73-4038-b616-2d3559aaf784-kube-api-access-vmkg5\") pod \"certified-operators-sqfvs\" (UID: \"50cf3400-fb73-4038-b616-2d3559aaf784\") " pod="openshift-marketplace/certified-operators-sqfvs" Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.647362 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fxcmt" event={"ID":"a3d454ff-aef1-427e-a572-c21562fb3659","Type":"ContainerStarted","Data":"083d0503b045a3fea9cd96fb2f0361116b7da46d26b1b8feccad39432a776572"} Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.647423 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fxcmt" 
event={"ID":"a3d454ff-aef1-427e-a572-c21562fb3659","Type":"ContainerStarted","Data":"dcfc58ef5beabf96ad5f5143af6ad8ddb9f9bbbbffa2e125f08aef4f8d1d8b96"} Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.648988 5014 generic.go:334] "Generic (PLEG): container finished" podID="91c20ddd-76d6-4e47-a24e-ec090ff039de" containerID="f2d472137bc70dded2e55dee4388671fe35e0e9737e1a729e2dfdfd32b64eb66" exitCode=0 Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.649603 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537550-hbznb" event={"ID":"91c20ddd-76d6-4e47-a24e-ec090ff039de","Type":"ContainerDied","Data":"f2d472137bc70dded2e55dee4388671fe35e0e9737e1a729e2dfdfd32b64eb66"} Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.650768 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qmrxt" podUID="b9aa43b2-c924-4e9d-8c38-cf39ee922d3e" containerName="route-controller-manager" containerID="cri-o://17e57e3bc82aa20ea8544f04571d937a23451ef206f456824d959a2c83a171a0" gracePeriod=30 Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.669279 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.684637 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sqfvs" Feb 28 04:37:25 crc kubenswrapper[5014]: E0228 04:37:25.685065 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:26.18502452 +0000 UTC m=+234.855150420 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.742971 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kx627"] Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.745058 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kx627" Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.775146 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:25 crc kubenswrapper[5014]: E0228 04:37:25.780013 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-02-28 04:37:26.279992915 +0000 UTC m=+234.950118815 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.830007 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r5h8g"] Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.831786 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r5h8g" Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.848123 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kx627"] Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.879447 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.879663 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd99ec6a-5237-42f9-81ad-bd813d262c6d-catalog-content\") pod \"community-operators-kx627\" (UID: \"bd99ec6a-5237-42f9-81ad-bd813d262c6d\") " pod="openshift-marketplace/community-operators-kx627" Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.879704 5014 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvjgv\" (UniqueName: \"kubernetes.io/projected/bd99ec6a-5237-42f9-81ad-bd813d262c6d-kube-api-access-nvjgv\") pod \"community-operators-kx627\" (UID: \"bd99ec6a-5237-42f9-81ad-bd813d262c6d\") " pod="openshift-marketplace/community-operators-kx627" Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.879822 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd99ec6a-5237-42f9-81ad-bd813d262c6d-utilities\") pod \"community-operators-kx627\" (UID: \"bd99ec6a-5237-42f9-81ad-bd813d262c6d\") " pod="openshift-marketplace/community-operators-kx627" Feb 28 04:37:25 crc kubenswrapper[5014]: E0228 04:37:25.879950 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:26.379927842 +0000 UTC m=+235.050053752 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.900455 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r5h8g"] Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.942002 5014 patch_prober.go:28] interesting pod/router-default-5444994796-h4z55 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 04:37:25 crc kubenswrapper[5014]: [-]has-synced failed: reason withheld Feb 28 04:37:25 crc kubenswrapper[5014]: [+]process-running ok Feb 28 04:37:25 crc kubenswrapper[5014]: healthz check failed Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.942084 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h4z55" podUID="6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.986993 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.987047 5014 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bdb5d29-5a4c-4358-a276-58efd08a8655-catalog-content\") pod \"certified-operators-r5h8g\" (UID: \"7bdb5d29-5a4c-4358-a276-58efd08a8655\") " pod="openshift-marketplace/certified-operators-r5h8g" Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.987076 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqtc5\" (UniqueName: \"kubernetes.io/projected/7bdb5d29-5a4c-4358-a276-58efd08a8655-kube-api-access-rqtc5\") pod \"certified-operators-r5h8g\" (UID: \"7bdb5d29-5a4c-4358-a276-58efd08a8655\") " pod="openshift-marketplace/certified-operators-r5h8g" Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.987116 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd99ec6a-5237-42f9-81ad-bd813d262c6d-utilities\") pod \"community-operators-kx627\" (UID: \"bd99ec6a-5237-42f9-81ad-bd813d262c6d\") " pod="openshift-marketplace/community-operators-kx627" Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.987156 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd99ec6a-5237-42f9-81ad-bd813d262c6d-catalog-content\") pod \"community-operators-kx627\" (UID: \"bd99ec6a-5237-42f9-81ad-bd813d262c6d\") " pod="openshift-marketplace/community-operators-kx627" Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.987178 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvjgv\" (UniqueName: \"kubernetes.io/projected/bd99ec6a-5237-42f9-81ad-bd813d262c6d-kube-api-access-nvjgv\") pod \"community-operators-kx627\" (UID: \"bd99ec6a-5237-42f9-81ad-bd813d262c6d\") " pod="openshift-marketplace/community-operators-kx627" Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 
04:37:25.987217 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bdb5d29-5a4c-4358-a276-58efd08a8655-utilities\") pod \"certified-operators-r5h8g\" (UID: \"7bdb5d29-5a4c-4358-a276-58efd08a8655\") " pod="openshift-marketplace/certified-operators-r5h8g" Feb 28 04:37:25 crc kubenswrapper[5014]: E0228 04:37:25.987542 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:26.487528527 +0000 UTC m=+235.157654437 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.988097 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd99ec6a-5237-42f9-81ad-bd813d262c6d-utilities\") pod \"community-operators-kx627\" (UID: \"bd99ec6a-5237-42f9-81ad-bd813d262c6d\") " pod="openshift-marketplace/community-operators-kx627" Feb 28 04:37:25 crc kubenswrapper[5014]: I0228 04:37:25.988396 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd99ec6a-5237-42f9-81ad-bd813d262c6d-catalog-content\") pod \"community-operators-kx627\" (UID: \"bd99ec6a-5237-42f9-81ad-bd813d262c6d\") " pod="openshift-marketplace/community-operators-kx627" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 
04:37:26.058052 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvjgv\" (UniqueName: \"kubernetes.io/projected/bd99ec6a-5237-42f9-81ad-bd813d262c6d-kube-api-access-nvjgv\") pod \"community-operators-kx627\" (UID: \"bd99ec6a-5237-42f9-81ad-bd813d262c6d\") " pod="openshift-marketplace/community-operators-kx627" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.090446 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.090860 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bdb5d29-5a4c-4358-a276-58efd08a8655-utilities\") pod \"certified-operators-r5h8g\" (UID: \"7bdb5d29-5a4c-4358-a276-58efd08a8655\") " pod="openshift-marketplace/certified-operators-r5h8g" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.090920 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bdb5d29-5a4c-4358-a276-58efd08a8655-catalog-content\") pod \"certified-operators-r5h8g\" (UID: \"7bdb5d29-5a4c-4358-a276-58efd08a8655\") " pod="openshift-marketplace/certified-operators-r5h8g" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.090959 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqtc5\" (UniqueName: \"kubernetes.io/projected/7bdb5d29-5a4c-4358-a276-58efd08a8655-kube-api-access-rqtc5\") pod \"certified-operators-r5h8g\" (UID: \"7bdb5d29-5a4c-4358-a276-58efd08a8655\") " pod="openshift-marketplace/certified-operators-r5h8g" Feb 28 04:37:26 crc kubenswrapper[5014]: E0228 
04:37:26.091068 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:26.591050736 +0000 UTC m=+235.261176646 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.091450 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bdb5d29-5a4c-4358-a276-58efd08a8655-utilities\") pod \"certified-operators-r5h8g\" (UID: \"7bdb5d29-5a4c-4358-a276-58efd08a8655\") " pod="openshift-marketplace/certified-operators-r5h8g" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.091499 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bdb5d29-5a4c-4358-a276-58efd08a8655-catalog-content\") pod \"certified-operators-r5h8g\" (UID: \"7bdb5d29-5a4c-4358-a276-58efd08a8655\") " pod="openshift-marketplace/certified-operators-r5h8g" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.136535 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqtc5\" (UniqueName: \"kubernetes.io/projected/7bdb5d29-5a4c-4358-a276-58efd08a8655-kube-api-access-rqtc5\") pod \"certified-operators-r5h8g\" (UID: \"7bdb5d29-5a4c-4358-a276-58efd08a8655\") " pod="openshift-marketplace/certified-operators-r5h8g" Feb 28 04:37:26 crc 
kubenswrapper[5014]: I0228 04:37:26.136939 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kx627" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.168495 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r5h8g" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.191713 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:26 crc kubenswrapper[5014]: E0228 04:37:26.192108 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:26.692093124 +0000 UTC m=+235.362219034 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.207674 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.209294 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.219062 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.219360 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.246173 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.292603 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9cznf"] Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.293361 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.293582 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c95e257-4b60-4a76-9849-7c5daa13b539-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6c95e257-4b60-4a76-9849-7c5daa13b539\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.293672 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c95e257-4b60-4a76-9849-7c5daa13b539-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6c95e257-4b60-4a76-9849-7c5daa13b539\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 04:37:26 crc kubenswrapper[5014]: E0228 04:37:26.293843 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-28 04:37:26.793823021 +0000 UTC m=+235.463948921 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.380210 5014 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.395288 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c95e257-4b60-4a76-9849-7c5daa13b539-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6c95e257-4b60-4a76-9849-7c5daa13b539\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.395386 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.395405 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c95e257-4b60-4a76-9849-7c5daa13b539-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6c95e257-4b60-4a76-9849-7c5daa13b539\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.395490 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c95e257-4b60-4a76-9849-7c5daa13b539-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"6c95e257-4b60-4a76-9849-7c5daa13b539\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 04:37:26 crc kubenswrapper[5014]: E0228 04:37:26.396119 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-28 04:37:26.896107145 +0000 UTC m=+235.566233055 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-sm9r4" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.408981 5014 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-28T04:37:26.380243155Z","Handler":null,"Name":""} Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.426101 5014 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.426155 5014 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.441272 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c95e257-4b60-4a76-9849-7c5daa13b539-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"6c95e257-4b60-4a76-9849-7c5daa13b539\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.458244 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qmrxt" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.501198 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.501326 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9aa43b2-c924-4e9d-8c38-cf39ee922d3e-client-ca\") pod \"b9aa43b2-c924-4e9d-8c38-cf39ee922d3e\" (UID: \"b9aa43b2-c924-4e9d-8c38-cf39ee922d3e\") " Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.501361 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxltc\" (UniqueName: \"kubernetes.io/projected/b9aa43b2-c924-4e9d-8c38-cf39ee922d3e-kube-api-access-dxltc\") pod \"b9aa43b2-c924-4e9d-8c38-cf39ee922d3e\" (UID: \"b9aa43b2-c924-4e9d-8c38-cf39ee922d3e\") " Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.501398 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9aa43b2-c924-4e9d-8c38-cf39ee922d3e-config\") pod \"b9aa43b2-c924-4e9d-8c38-cf39ee922d3e\" (UID: \"b9aa43b2-c924-4e9d-8c38-cf39ee922d3e\") " Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.501429 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9aa43b2-c924-4e9d-8c38-cf39ee922d3e-serving-cert\") pod \"b9aa43b2-c924-4e9d-8c38-cf39ee922d3e\" (UID: \"b9aa43b2-c924-4e9d-8c38-cf39ee922d3e\") " Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.503704 5014 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sqfvs"] Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.508998 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9aa43b2-c924-4e9d-8c38-cf39ee922d3e-config" (OuterVolumeSpecName: "config") pod "b9aa43b2-c924-4e9d-8c38-cf39ee922d3e" (UID: "b9aa43b2-c924-4e9d-8c38-cf39ee922d3e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.509254 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9aa43b2-c924-4e9d-8c38-cf39ee922d3e-client-ca" (OuterVolumeSpecName: "client-ca") pod "b9aa43b2-c924-4e9d-8c38-cf39ee922d3e" (UID: "b9aa43b2-c924-4e9d-8c38-cf39ee922d3e"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.521757 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.522692 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9aa43b2-c924-4e9d-8c38-cf39ee922d3e-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b9aa43b2-c924-4e9d-8c38-cf39ee922d3e" (UID: "b9aa43b2-c924-4e9d-8c38-cf39ee922d3e"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.526308 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9aa43b2-c924-4e9d-8c38-cf39ee922d3e-kube-api-access-dxltc" (OuterVolumeSpecName: "kube-api-access-dxltc") pod "b9aa43b2-c924-4e9d-8c38-cf39ee922d3e" (UID: "b9aa43b2-c924-4e9d-8c38-cf39ee922d3e"). InnerVolumeSpecName "kube-api-access-dxltc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:37:26 crc kubenswrapper[5014]: W0228 04:37:26.530586 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50cf3400_fb73_4038_b616_2d3559aaf784.slice/crio-30ba94d403b5324b549cabf95316902b5413cb95ee203dc6325869fceef711ac WatchSource:0}: Error finding container 30ba94d403b5324b549cabf95316902b5413cb95ee203dc6325869fceef711ac: Status 404 returned error can't find the container with id 30ba94d403b5324b549cabf95316902b5413cb95ee203dc6325869fceef711ac Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.563556 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.603071 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.603145 5014 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b9aa43b2-c924-4e9d-8c38-cf39ee922d3e-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.603158 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxltc\" (UniqueName: \"kubernetes.io/projected/b9aa43b2-c924-4e9d-8c38-cf39ee922d3e-kube-api-access-dxltc\") on node \"crc\" DevicePath \"\"" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.603183 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b9aa43b2-c924-4e9d-8c38-cf39ee922d3e-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.603214 5014 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b9aa43b2-c924-4e9d-8c38-cf39ee922d3e-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.612637 5014 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.612685 5014 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.716061 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-sm9r4\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.752722 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kx627"] Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.753702 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-fxcmt" event={"ID":"a3d454ff-aef1-427e-a572-c21562fb3659","Type":"ContainerStarted","Data":"3da9c41c51f572f02b43e138ce8b4600d8b69045046e1be49ba21c9a229b17bd"} Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.780729 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9cznf" event={"ID":"52079806-fc0c-4852-8150-0123d376c1b2","Type":"ContainerStarted","Data":"029101b17b47ef3f330f6fe6a57689cf3e21070e70249db678506708b50cea87"} Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.799428 5014 ???:1] "http: TLS handshake error from 
192.168.126.11:47446: no serving certificate available for the kubelet" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.806302 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqfvs" event={"ID":"50cf3400-fb73-4038-b616-2d3559aaf784","Type":"ContainerStarted","Data":"30ba94d403b5324b549cabf95316902b5413cb95ee203dc6325869fceef711ac"} Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.822467 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-fxcmt" podStartSLOduration=11.822429447 podStartE2EDuration="11.822429447s" podCreationTimestamp="2026-02-28 04:37:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:26.790463469 +0000 UTC m=+235.460589379" watchObservedRunningTime="2026-02-28 04:37:26.822429447 +0000 UTC m=+235.492555357" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.837288 5014 generic.go:334] "Generic (PLEG): container finished" podID="b9aa43b2-c924-4e9d-8c38-cf39ee922d3e" containerID="17e57e3bc82aa20ea8544f04571d937a23451ef206f456824d959a2c83a171a0" exitCode=0 Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.837627 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-bl64c" podUID="797b2165-10cf-4886-a106-7f1010672030" containerName="controller-manager" containerID="cri-o://dba19494527a57c18b2cba26233275af18bfd66b45a1e4cf3ab20e9732be14a4" gracePeriod=30 Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.838082 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qmrxt" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.838722 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qmrxt" event={"ID":"b9aa43b2-c924-4e9d-8c38-cf39ee922d3e","Type":"ContainerDied","Data":"17e57e3bc82aa20ea8544f04571d937a23451ef206f456824d959a2c83a171a0"} Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.838778 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-qmrxt" event={"ID":"b9aa43b2-c924-4e9d-8c38-cf39ee922d3e","Type":"ContainerDied","Data":"30424cb4daced262e91c198603e6243713c3d6382436bd76be43bcdc052f2f9b"} Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.838799 5014 scope.go:117] "RemoveContainer" containerID="17e57e3bc82aa20ea8544f04571d937a23451ef206f456824d959a2c83a171a0" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.864239 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dccb57d9c-p9k9z"] Feb 28 04:37:26 crc kubenswrapper[5014]: E0228 04:37:26.864557 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9aa43b2-c924-4e9d-8c38-cf39ee922d3e" containerName="route-controller-manager" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.864571 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9aa43b2-c924-4e9d-8c38-cf39ee922d3e" containerName="route-controller-manager" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.864777 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9aa43b2-c924-4e9d-8c38-cf39ee922d3e" containerName="route-controller-manager" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.865277 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dccb57d9c-p9k9z" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.871547 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dccb57d9c-p9k9z"] Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.871739 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.873770 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.874034 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.874168 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.874311 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.875268 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.883094 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r5h8g"] Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.910776 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b36d7a0-d5d2-460c-a0a6-03b165ae5740-client-ca\") pod \"route-controller-manager-5dccb57d9c-p9k9z\" (UID: \"0b36d7a0-d5d2-460c-a0a6-03b165ae5740\") " 
pod="openshift-route-controller-manager/route-controller-manager-5dccb57d9c-p9k9z" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.910856 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjp82\" (UniqueName: \"kubernetes.io/projected/0b36d7a0-d5d2-460c-a0a6-03b165ae5740-kube-api-access-fjp82\") pod \"route-controller-manager-5dccb57d9c-p9k9z\" (UID: \"0b36d7a0-d5d2-460c-a0a6-03b165ae5740\") " pod="openshift-route-controller-manager/route-controller-manager-5dccb57d9c-p9k9z" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.910909 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b36d7a0-d5d2-460c-a0a6-03b165ae5740-serving-cert\") pod \"route-controller-manager-5dccb57d9c-p9k9z\" (UID: \"0b36d7a0-d5d2-460c-a0a6-03b165ae5740\") " pod="openshift-route-controller-manager/route-controller-manager-5dccb57d9c-p9k9z" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.910960 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b36d7a0-d5d2-460c-a0a6-03b165ae5740-config\") pod \"route-controller-manager-5dccb57d9c-p9k9z\" (UID: \"0b36d7a0-d5d2-460c-a0a6-03b165ae5740\") " pod="openshift-route-controller-manager/route-controller-manager-5dccb57d9c-p9k9z" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.912130 5014 scope.go:117] "RemoveContainer" containerID="17e57e3bc82aa20ea8544f04571d937a23451ef206f456824d959a2c83a171a0" Feb 28 04:37:26 crc kubenswrapper[5014]: E0228 04:37:26.912523 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17e57e3bc82aa20ea8544f04571d937a23451ef206f456824d959a2c83a171a0\": container with ID starting with 17e57e3bc82aa20ea8544f04571d937a23451ef206f456824d959a2c83a171a0 not found: ID 
does not exist" containerID="17e57e3bc82aa20ea8544f04571d937a23451ef206f456824d959a2c83a171a0" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.912565 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17e57e3bc82aa20ea8544f04571d937a23451ef206f456824d959a2c83a171a0"} err="failed to get container status \"17e57e3bc82aa20ea8544f04571d937a23451ef206f456824d959a2c83a171a0\": rpc error: code = NotFound desc = could not find container \"17e57e3bc82aa20ea8544f04571d937a23451ef206f456824d959a2c83a171a0\": container with ID starting with 17e57e3bc82aa20ea8544f04571d937a23451ef206f456824d959a2c83a171a0 not found: ID does not exist" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.958149 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.963046 5014 patch_prober.go:28] interesting pod/router-default-5444994796-h4z55 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 04:37:26 crc kubenswrapper[5014]: [-]has-synced failed: reason withheld Feb 28 04:37:26 crc kubenswrapper[5014]: [+]process-running ok Feb 28 04:37:26 crc kubenswrapper[5014]: healthz check failed Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.963133 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h4z55" podUID="6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.967512 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-488hv" Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.971538 5014 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qmrxt"] Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.975126 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-qmrxt"] Feb 28 04:37:26 crc kubenswrapper[5014]: I0228 04:37:26.992262 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.019350 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b36d7a0-d5d2-460c-a0a6-03b165ae5740-client-ca\") pod \"route-controller-manager-5dccb57d9c-p9k9z\" (UID: \"0b36d7a0-d5d2-460c-a0a6-03b165ae5740\") " pod="openshift-route-controller-manager/route-controller-manager-5dccb57d9c-p9k9z" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.019386 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjp82\" (UniqueName: \"kubernetes.io/projected/0b36d7a0-d5d2-460c-a0a6-03b165ae5740-kube-api-access-fjp82\") pod \"route-controller-manager-5dccb57d9c-p9k9z\" (UID: \"0b36d7a0-d5d2-460c-a0a6-03b165ae5740\") " pod="openshift-route-controller-manager/route-controller-manager-5dccb57d9c-p9k9z" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.033472 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b36d7a0-d5d2-460c-a0a6-03b165ae5740-client-ca\") pod \"route-controller-manager-5dccb57d9c-p9k9z\" (UID: \"0b36d7a0-d5d2-460c-a0a6-03b165ae5740\") " pod="openshift-route-controller-manager/route-controller-manager-5dccb57d9c-p9k9z" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.043163 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0b36d7a0-d5d2-460c-a0a6-03b165ae5740-serving-cert\") pod \"route-controller-manager-5dccb57d9c-p9k9z\" (UID: \"0b36d7a0-d5d2-460c-a0a6-03b165ae5740\") " pod="openshift-route-controller-manager/route-controller-manager-5dccb57d9c-p9k9z" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.043343 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b36d7a0-d5d2-460c-a0a6-03b165ae5740-config\") pod \"route-controller-manager-5dccb57d9c-p9k9z\" (UID: \"0b36d7a0-d5d2-460c-a0a6-03b165ae5740\") " pod="openshift-route-controller-manager/route-controller-manager-5dccb57d9c-p9k9z" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.045105 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b36d7a0-d5d2-460c-a0a6-03b165ae5740-config\") pod \"route-controller-manager-5dccb57d9c-p9k9z\" (UID: \"0b36d7a0-d5d2-460c-a0a6-03b165ae5740\") " pod="openshift-route-controller-manager/route-controller-manager-5dccb57d9c-p9k9z" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.065322 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b36d7a0-d5d2-460c-a0a6-03b165ae5740-serving-cert\") pod \"route-controller-manager-5dccb57d9c-p9k9z\" (UID: \"0b36d7a0-d5d2-460c-a0a6-03b165ae5740\") " pod="openshift-route-controller-manager/route-controller-manager-5dccb57d9c-p9k9z" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.075160 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjp82\" (UniqueName: \"kubernetes.io/projected/0b36d7a0-d5d2-460c-a0a6-03b165ae5740-kube-api-access-fjp82\") pod \"route-controller-manager-5dccb57d9c-p9k9z\" (UID: \"0b36d7a0-d5d2-460c-a0a6-03b165ae5740\") " pod="openshift-route-controller-manager/route-controller-manager-5dccb57d9c-p9k9z" Feb 28 04:37:27 crc 
kubenswrapper[5014]: I0228 04:37:27.204542 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537550-hbznb" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.210118 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dccb57d9c-p9k9z" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.342738 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-npdf6"] Feb 28 04:37:27 crc kubenswrapper[5014]: E0228 04:37:27.343048 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91c20ddd-76d6-4e47-a24e-ec090ff039de" containerName="collect-profiles" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.343063 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="91c20ddd-76d6-4e47-a24e-ec090ff039de" containerName="collect-profiles" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.343177 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="91c20ddd-76d6-4e47-a24e-ec090ff039de" containerName="collect-profiles" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.347597 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-npdf6" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.349689 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.350356 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/91c20ddd-76d6-4e47-a24e-ec090ff039de-secret-volume\") pod \"91c20ddd-76d6-4e47-a24e-ec090ff039de\" (UID: \"91c20ddd-76d6-4e47-a24e-ec090ff039de\") " Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.350471 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hx42m\" (UniqueName: \"kubernetes.io/projected/91c20ddd-76d6-4e47-a24e-ec090ff039de-kube-api-access-hx42m\") pod \"91c20ddd-76d6-4e47-a24e-ec090ff039de\" (UID: \"91c20ddd-76d6-4e47-a24e-ec090ff039de\") " Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.350605 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91c20ddd-76d6-4e47-a24e-ec090ff039de-config-volume\") pod \"91c20ddd-76d6-4e47-a24e-ec090ff039de\" (UID: \"91c20ddd-76d6-4e47-a24e-ec090ff039de\") " Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.354176 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91c20ddd-76d6-4e47-a24e-ec090ff039de-config-volume" (OuterVolumeSpecName: "config-volume") pod "91c20ddd-76d6-4e47-a24e-ec090ff039de" (UID: "91c20ddd-76d6-4e47-a24e-ec090ff039de"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.361573 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91c20ddd-76d6-4e47-a24e-ec090ff039de-kube-api-access-hx42m" (OuterVolumeSpecName: "kube-api-access-hx42m") pod "91c20ddd-76d6-4e47-a24e-ec090ff039de" (UID: "91c20ddd-76d6-4e47-a24e-ec090ff039de"). InnerVolumeSpecName "kube-api-access-hx42m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.367031 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-npdf6"] Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.376979 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91c20ddd-76d6-4e47-a24e-ec090ff039de-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "91c20ddd-76d6-4e47-a24e-ec090ff039de" (UID: "91c20ddd-76d6-4e47-a24e-ec090ff039de"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.396939 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.432439 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bl64c" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.454612 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr7z5\" (UniqueName: \"kubernetes.io/projected/8a00f74f-e858-42cc-b882-492afd45684d-kube-api-access-tr7z5\") pod \"redhat-marketplace-npdf6\" (UID: \"8a00f74f-e858-42cc-b882-492afd45684d\") " pod="openshift-marketplace/redhat-marketplace-npdf6" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.454751 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a00f74f-e858-42cc-b882-492afd45684d-utilities\") pod \"redhat-marketplace-npdf6\" (UID: \"8a00f74f-e858-42cc-b882-492afd45684d\") " pod="openshift-marketplace/redhat-marketplace-npdf6" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.454788 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a00f74f-e858-42cc-b882-492afd45684d-catalog-content\") pod \"redhat-marketplace-npdf6\" (UID: \"8a00f74f-e858-42cc-b882-492afd45684d\") " pod="openshift-marketplace/redhat-marketplace-npdf6" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.454846 5014 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91c20ddd-76d6-4e47-a24e-ec090ff039de-config-volume\") on node \"crc\" DevicePath \"\"" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.454857 5014 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/91c20ddd-76d6-4e47-a24e-ec090ff039de-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.454866 5014 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-hx42m\" (UniqueName: \"kubernetes.io/projected/91c20ddd-76d6-4e47-a24e-ec090ff039de-kube-api-access-hx42m\") on node \"crc\" DevicePath \"\"" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.456917 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sm9r4"] Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.558265 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/797b2165-10cf-4886-a106-7f1010672030-proxy-ca-bundles\") pod \"797b2165-10cf-4886-a106-7f1010672030\" (UID: \"797b2165-10cf-4886-a106-7f1010672030\") " Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.558349 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fl2p8\" (UniqueName: \"kubernetes.io/projected/797b2165-10cf-4886-a106-7f1010672030-kube-api-access-fl2p8\") pod \"797b2165-10cf-4886-a106-7f1010672030\" (UID: \"797b2165-10cf-4886-a106-7f1010672030\") " Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.558390 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/797b2165-10cf-4886-a106-7f1010672030-config\") pod \"797b2165-10cf-4886-a106-7f1010672030\" (UID: \"797b2165-10cf-4886-a106-7f1010672030\") " Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.558421 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/797b2165-10cf-4886-a106-7f1010672030-client-ca\") pod \"797b2165-10cf-4886-a106-7f1010672030\" (UID: \"797b2165-10cf-4886-a106-7f1010672030\") " Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.558488 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/797b2165-10cf-4886-a106-7f1010672030-serving-cert\") pod \"797b2165-10cf-4886-a106-7f1010672030\" (UID: \"797b2165-10cf-4886-a106-7f1010672030\") " Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.558746 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a00f74f-e858-42cc-b882-492afd45684d-utilities\") pod \"redhat-marketplace-npdf6\" (UID: \"8a00f74f-e858-42cc-b882-492afd45684d\") " pod="openshift-marketplace/redhat-marketplace-npdf6" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.558787 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a00f74f-e858-42cc-b882-492afd45684d-catalog-content\") pod \"redhat-marketplace-npdf6\" (UID: \"8a00f74f-e858-42cc-b882-492afd45684d\") " pod="openshift-marketplace/redhat-marketplace-npdf6" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.558829 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr7z5\" (UniqueName: \"kubernetes.io/projected/8a00f74f-e858-42cc-b882-492afd45684d-kube-api-access-tr7z5\") pod \"redhat-marketplace-npdf6\" (UID: \"8a00f74f-e858-42cc-b882-492afd45684d\") " pod="openshift-marketplace/redhat-marketplace-npdf6" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.561332 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/797b2165-10cf-4886-a106-7f1010672030-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "797b2165-10cf-4886-a106-7f1010672030" (UID: "797b2165-10cf-4886-a106-7f1010672030"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.562286 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/797b2165-10cf-4886-a106-7f1010672030-config" (OuterVolumeSpecName: "config") pod "797b2165-10cf-4886-a106-7f1010672030" (UID: "797b2165-10cf-4886-a106-7f1010672030"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.562605 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a00f74f-e858-42cc-b882-492afd45684d-utilities\") pod \"redhat-marketplace-npdf6\" (UID: \"8a00f74f-e858-42cc-b882-492afd45684d\") " pod="openshift-marketplace/redhat-marketplace-npdf6" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.562648 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a00f74f-e858-42cc-b882-492afd45684d-catalog-content\") pod \"redhat-marketplace-npdf6\" (UID: \"8a00f74f-e858-42cc-b882-492afd45684d\") " pod="openshift-marketplace/redhat-marketplace-npdf6" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.567341 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/797b2165-10cf-4886-a106-7f1010672030-client-ca" (OuterVolumeSpecName: "client-ca") pod "797b2165-10cf-4886-a106-7f1010672030" (UID: "797b2165-10cf-4886-a106-7f1010672030"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.569327 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/797b2165-10cf-4886-a106-7f1010672030-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "797b2165-10cf-4886-a106-7f1010672030" (UID: "797b2165-10cf-4886-a106-7f1010672030"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.583396 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/797b2165-10cf-4886-a106-7f1010672030-kube-api-access-fl2p8" (OuterVolumeSpecName: "kube-api-access-fl2p8") pod "797b2165-10cf-4886-a106-7f1010672030" (UID: "797b2165-10cf-4886-a106-7f1010672030"). InnerVolumeSpecName "kube-api-access-fl2p8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.609157 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr7z5\" (UniqueName: \"kubernetes.io/projected/8a00f74f-e858-42cc-b882-492afd45684d-kube-api-access-tr7z5\") pod \"redhat-marketplace-npdf6\" (UID: \"8a00f74f-e858-42cc-b882-492afd45684d\") " pod="openshift-marketplace/redhat-marketplace-npdf6" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.663876 5014 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/797b2165-10cf-4886-a106-7f1010672030-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.663927 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fl2p8\" (UniqueName: \"kubernetes.io/projected/797b2165-10cf-4886-a106-7f1010672030-kube-api-access-fl2p8\") on node \"crc\" DevicePath \"\"" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.663944 5014 reconciler_common.go:293] 
"Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/797b2165-10cf-4886-a106-7f1010672030-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.663955 5014 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/797b2165-10cf-4886-a106-7f1010672030-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.663967 5014 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/797b2165-10cf-4886-a106-7f1010672030-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.718223 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-npdf6" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.739698 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-64ktb"] Feb 28 04:37:27 crc kubenswrapper[5014]: E0228 04:37:27.740044 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="797b2165-10cf-4886-a106-7f1010672030" containerName="controller-manager" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.740066 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="797b2165-10cf-4886-a106-7f1010672030" containerName="controller-manager" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.740224 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="797b2165-10cf-4886-a106-7f1010672030" containerName="controller-manager" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.741762 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-64ktb" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.747745 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-64ktb"] Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.828282 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-s68g5" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.841952 5014 patch_prober.go:28] interesting pod/downloads-7954f5f757-cghpw container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.842022 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-cghpw" podUID="9f80824d-7fc7-44e3-982c-2856a99523be" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.842090 5014 patch_prober.go:28] interesting pod/downloads-7954f5f757-cghpw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.842117 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cghpw" podUID="9f80824d-7fc7-44e3-982c-2856a99523be" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.881413 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-5dccb57d9c-p9k9z"] Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.884886 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kz2l\" (UniqueName: \"kubernetes.io/projected/d19bc223-12d6-45a9-87de-31ec3b6d9557-kube-api-access-5kz2l\") pod \"redhat-marketplace-64ktb\" (UID: \"d19bc223-12d6-45a9-87de-31ec3b6d9557\") " pod="openshift-marketplace/redhat-marketplace-64ktb" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.886370 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d19bc223-12d6-45a9-87de-31ec3b6d9557-catalog-content\") pod \"redhat-marketplace-64ktb\" (UID: \"d19bc223-12d6-45a9-87de-31ec3b6d9557\") " pod="openshift-marketplace/redhat-marketplace-64ktb" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.886464 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d19bc223-12d6-45a9-87de-31ec3b6d9557-utilities\") pod \"redhat-marketplace-64ktb\" (UID: \"d19bc223-12d6-45a9-87de-31ec3b6d9557\") " pod="openshift-marketplace/redhat-marketplace-64ktb" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.887730 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.891391 5014 generic.go:334] "Generic (PLEG): container finished" podID="52079806-fc0c-4852-8150-0123d376c1b2" containerID="3887c3314de07d5bc5a02a84043f4e0063c5a18cc9918fcc61bcbd542efb30b9" exitCode=0 Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.891678 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9cznf" 
event={"ID":"52079806-fc0c-4852-8150-0123d376c1b2","Type":"ContainerDied","Data":"3887c3314de07d5bc5a02a84043f4e0063c5a18cc9918fcc61bcbd542efb30b9"} Feb 28 04:37:27 crc kubenswrapper[5014]: W0228 04:37:27.895549 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b36d7a0_d5d2_460c_a0a6_03b165ae5740.slice/crio-7dcd765f9c3c77ae21fe13b8c6346f40d25742483055a91a6d107e58e2d5ad1d WatchSource:0}: Error finding container 7dcd765f9c3c77ae21fe13b8c6346f40d25742483055a91a6d107e58e2d5ad1d: Status 404 returned error can't find the container with id 7dcd765f9c3c77ae21fe13b8c6346f40d25742483055a91a6d107e58e2d5ad1d Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.896274 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-t8497" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.901187 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"6c95e257-4b60-4a76-9849-7c5daa13b539","Type":"ContainerStarted","Data":"e807e24de1c37e73d79f713635444760b45c46446f1a6e929fcce6188331ff98"} Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.915673 5014 generic.go:334] "Generic (PLEG): container finished" podID="7bdb5d29-5a4c-4358-a276-58efd08a8655" containerID="635d7b700cf15c02191534cb5876fc76185e4ccb324da66c05d6eca14ecc191b" exitCode=0 Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.915767 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5h8g" event={"ID":"7bdb5d29-5a4c-4358-a276-58efd08a8655","Type":"ContainerDied","Data":"635d7b700cf15c02191534cb5876fc76185e4ccb324da66c05d6eca14ecc191b"} Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.915845 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5h8g" 
event={"ID":"7bdb5d29-5a4c-4358-a276-58efd08a8655","Type":"ContainerStarted","Data":"9f2ff58fccc402d215407db9b6b4cc257fab3bd62fe055aa17cd8eafe6508e6d"} Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.917509 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537550-hbznb" event={"ID":"91c20ddd-76d6-4e47-a24e-ec090ff039de","Type":"ContainerDied","Data":"35ac46e9a6ed6799fad217ed58b3bd60d5f9de7e04b95ff84a8d47cc5ab776c7"} Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.917567 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35ac46e9a6ed6799fad217ed58b3bd60d5f9de7e04b95ff84a8d47cc5ab776c7" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.918699 5014 generic.go:334] "Generic (PLEG): container finished" podID="797b2165-10cf-4886-a106-7f1010672030" containerID="dba19494527a57c18b2cba26233275af18bfd66b45a1e4cf3ab20e9732be14a4" exitCode=0 Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.918744 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bl64c" event={"ID":"797b2165-10cf-4886-a106-7f1010672030","Type":"ContainerDied","Data":"dba19494527a57c18b2cba26233275af18bfd66b45a1e4cf3ab20e9732be14a4"} Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.918834 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-bl64c" event={"ID":"797b2165-10cf-4886-a106-7f1010672030","Type":"ContainerDied","Data":"818a2ef10758e585a6e52ca1c37fe1ee86ad46b6d40accb532ccd576d45d3a32"} Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.918860 5014 scope.go:117] "RemoveContainer" containerID="dba19494527a57c18b2cba26233275af18bfd66b45a1e4cf3ab20e9732be14a4" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.919059 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-bl64c" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.922234 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537550-hbznb" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.929353 5014 patch_prober.go:28] interesting pod/router-default-5444994796-h4z55 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 04:37:27 crc kubenswrapper[5014]: [-]has-synced failed: reason withheld Feb 28 04:37:27 crc kubenswrapper[5014]: [+]process-running ok Feb 28 04:37:27 crc kubenswrapper[5014]: healthz check failed Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.929423 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h4z55" podUID="6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.936166 5014 generic.go:334] "Generic (PLEG): container finished" podID="50cf3400-fb73-4038-b616-2d3559aaf784" containerID="c58870833a4b295eea0a120f2f28e7b596e36a9798e90855009cca95fe301cae" exitCode=0 Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.937383 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqfvs" event={"ID":"50cf3400-fb73-4038-b616-2d3559aaf784","Type":"ContainerDied","Data":"c58870833a4b295eea0a120f2f28e7b596e36a9798e90855009cca95fe301cae"} Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.943956 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" 
event={"ID":"8bf1ab3c-8003-4a48-b248-30282df03e95","Type":"ContainerStarted","Data":"086fbbe28e04b9e93dd7ac8173bbf7a394ce52c08b15c706dd7f807121ae923e"} Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.944007 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" event={"ID":"8bf1ab3c-8003-4a48-b248-30282df03e95","Type":"ContainerStarted","Data":"b5e662359242bbec23461b918802ed6766e5f79840cd2b10863a0b9a225910dc"} Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.944855 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.985362 5014 generic.go:334] "Generic (PLEG): container finished" podID="bd99ec6a-5237-42f9-81ad-bd813d262c6d" containerID="c84ecce3f7b84faaf2e273cce8aca402b65f0e5b1c5afc3b968a367f67a39184" exitCode=0 Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.987251 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d19bc223-12d6-45a9-87de-31ec3b6d9557-utilities\") pod \"redhat-marketplace-64ktb\" (UID: \"d19bc223-12d6-45a9-87de-31ec3b6d9557\") " pod="openshift-marketplace/redhat-marketplace-64ktb" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.987312 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kz2l\" (UniqueName: \"kubernetes.io/projected/d19bc223-12d6-45a9-87de-31ec3b6d9557-kube-api-access-5kz2l\") pod \"redhat-marketplace-64ktb\" (UID: \"d19bc223-12d6-45a9-87de-31ec3b6d9557\") " pod="openshift-marketplace/redhat-marketplace-64ktb" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.987557 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d19bc223-12d6-45a9-87de-31ec3b6d9557-catalog-content\") pod 
\"redhat-marketplace-64ktb\" (UID: \"d19bc223-12d6-45a9-87de-31ec3b6d9557\") " pod="openshift-marketplace/redhat-marketplace-64ktb" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.986636 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kx627" event={"ID":"bd99ec6a-5237-42f9-81ad-bd813d262c6d","Type":"ContainerDied","Data":"c84ecce3f7b84faaf2e273cce8aca402b65f0e5b1c5afc3b968a367f67a39184"} Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.988085 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kx627" event={"ID":"bd99ec6a-5237-42f9-81ad-bd813d262c6d","Type":"ContainerStarted","Data":"8a949a74e97dc77d95dfd47164d840b7c3750f8d095374a4a90fbd8d574e3a23"} Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.988763 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d19bc223-12d6-45a9-87de-31ec3b6d9557-catalog-content\") pod \"redhat-marketplace-64ktb\" (UID: \"d19bc223-12d6-45a9-87de-31ec3b6d9557\") " pod="openshift-marketplace/redhat-marketplace-64ktb" Feb 28 04:37:27 crc kubenswrapper[5014]: I0228 04:37:27.989753 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d19bc223-12d6-45a9-87de-31ec3b6d9557-utilities\") pod \"redhat-marketplace-64ktb\" (UID: \"d19bc223-12d6-45a9-87de-31ec3b6d9557\") " pod="openshift-marketplace/redhat-marketplace-64ktb" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.014018 5014 scope.go:117] "RemoveContainer" containerID="dba19494527a57c18b2cba26233275af18bfd66b45a1e4cf3ab20e9732be14a4" Feb 28 04:37:28 crc kubenswrapper[5014]: E0228 04:37:28.019064 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dba19494527a57c18b2cba26233275af18bfd66b45a1e4cf3ab20e9732be14a4\": container with ID 
starting with dba19494527a57c18b2cba26233275af18bfd66b45a1e4cf3ab20e9732be14a4 not found: ID does not exist" containerID="dba19494527a57c18b2cba26233275af18bfd66b45a1e4cf3ab20e9732be14a4" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.019113 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dba19494527a57c18b2cba26233275af18bfd66b45a1e4cf3ab20e9732be14a4"} err="failed to get container status \"dba19494527a57c18b2cba26233275af18bfd66b45a1e4cf3ab20e9732be14a4\": rpc error: code = NotFound desc = could not find container \"dba19494527a57c18b2cba26233275af18bfd66b45a1e4cf3ab20e9732be14a4\": container with ID starting with dba19494527a57c18b2cba26233275af18bfd66b45a1e4cf3ab20e9732be14a4 not found: ID does not exist" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.027769 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kz2l\" (UniqueName: \"kubernetes.io/projected/d19bc223-12d6-45a9-87de-31ec3b6d9557-kube-api-access-5kz2l\") pod \"redhat-marketplace-64ktb\" (UID: \"d19bc223-12d6-45a9-87de-31ec3b6d9557\") " pod="openshift-marketplace/redhat-marketplace-64ktb" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.036348 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-n8xpb" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.036395 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-n8xpb" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.059991 5014 patch_prober.go:28] interesting pod/console-f9d7485db-n8xpb container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.18:8443/health\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.060059 5014 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-console/console-f9d7485db-n8xpb" podUID="8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.18:8443/health\": dial tcp 10.217.0.18:8443: connect: connection refused" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.096731 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" podStartSLOduration=187.096703679 podStartE2EDuration="3m7.096703679s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:28.095592188 +0000 UTC m=+236.765718098" watchObservedRunningTime="2026-02-28 04:37:28.096703679 +0000 UTC m=+236.766829589" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.117098 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bl64c"] Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.131879 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-64ktb" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.149290 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-bl64c"] Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.196080 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="797b2165-10cf-4886-a106-7f1010672030" path="/var/lib/kubelet/pods/797b2165-10cf-4886-a106-7f1010672030/volumes" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.196791 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.197351 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9aa43b2-c924-4e9d-8c38-cf39ee922d3e" path="/var/lib/kubelet/pods/b9aa43b2-c924-4e9d-8c38-cf39ee922d3e/volumes" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.199843 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-npdf6"] Feb 28 04:37:28 crc kubenswrapper[5014]: W0228 04:37:28.223357 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a00f74f_e858_42cc_b882_492afd45684d.slice/crio-7d89f646c6b2e1506360c194ff40f4142b8bf5415f10b823f60d7897fb86e1c4 WatchSource:0}: Error finding container 7d89f646c6b2e1506360c194ff40f4142b8bf5415f10b823f60d7897fb86e1c4: Status 404 returned error can't find the container with id 7d89f646c6b2e1506360c194ff40f4142b8bf5415f10b823f60d7897fb86e1c4 Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.339938 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5zq82"] Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.342516 5014 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5zq82" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.346331 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.355363 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5zq82"] Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.395664 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bba9702f-9e04-46d4-9a98-92d5303383c4-utilities\") pod \"redhat-operators-5zq82\" (UID: \"bba9702f-9e04-46d4-9a98-92d5303383c4\") " pod="openshift-marketplace/redhat-operators-5zq82" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.396124 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-725hc\" (UniqueName: \"kubernetes.io/projected/bba9702f-9e04-46d4-9a98-92d5303383c4-kube-api-access-725hc\") pod \"redhat-operators-5zq82\" (UID: \"bba9702f-9e04-46d4-9a98-92d5303383c4\") " pod="openshift-marketplace/redhat-operators-5zq82" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.396196 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bba9702f-9e04-46d4-9a98-92d5303383c4-catalog-content\") pod \"redhat-operators-5zq82\" (UID: \"bba9702f-9e04-46d4-9a98-92d5303383c4\") " pod="openshift-marketplace/redhat-operators-5zq82" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.400894 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-686c8df7df-28k2q"] Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.401633 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-686c8df7df-28k2q" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.405376 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.405798 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.405909 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.405999 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.406118 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.406330 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.413607 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.422610 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-64ktb"] Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.436042 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-686c8df7df-28k2q"] Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.497320 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/ed2fa586-d060-42df-9ef2-163f0dfa7c96-client-ca\") pod \"controller-manager-686c8df7df-28k2q\" (UID: \"ed2fa586-d060-42df-9ef2-163f0dfa7c96\") " pod="openshift-controller-manager/controller-manager-686c8df7df-28k2q" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.497437 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bba9702f-9e04-46d4-9a98-92d5303383c4-catalog-content\") pod \"redhat-operators-5zq82\" (UID: \"bba9702f-9e04-46d4-9a98-92d5303383c4\") " pod="openshift-marketplace/redhat-operators-5zq82" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.497506 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed2fa586-d060-42df-9ef2-163f0dfa7c96-serving-cert\") pod \"controller-manager-686c8df7df-28k2q\" (UID: \"ed2fa586-d060-42df-9ef2-163f0dfa7c96\") " pod="openshift-controller-manager/controller-manager-686c8df7df-28k2q" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.497553 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed2fa586-d060-42df-9ef2-163f0dfa7c96-config\") pod \"controller-manager-686c8df7df-28k2q\" (UID: \"ed2fa586-d060-42df-9ef2-163f0dfa7c96\") " pod="openshift-controller-manager/controller-manager-686c8df7df-28k2q" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.498525 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5twq\" (UniqueName: \"kubernetes.io/projected/ed2fa586-d060-42df-9ef2-163f0dfa7c96-kube-api-access-t5twq\") pod \"controller-manager-686c8df7df-28k2q\" (UID: \"ed2fa586-d060-42df-9ef2-163f0dfa7c96\") " pod="openshift-controller-manager/controller-manager-686c8df7df-28k2q" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 
04:37:28.498625 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bba9702f-9e04-46d4-9a98-92d5303383c4-utilities\") pod \"redhat-operators-5zq82\" (UID: \"bba9702f-9e04-46d4-9a98-92d5303383c4\") " pod="openshift-marketplace/redhat-operators-5zq82" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.498910 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed2fa586-d060-42df-9ef2-163f0dfa7c96-proxy-ca-bundles\") pod \"controller-manager-686c8df7df-28k2q\" (UID: \"ed2fa586-d060-42df-9ef2-163f0dfa7c96\") " pod="openshift-controller-manager/controller-manager-686c8df7df-28k2q" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.498968 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-725hc\" (UniqueName: \"kubernetes.io/projected/bba9702f-9e04-46d4-9a98-92d5303383c4-kube-api-access-725hc\") pod \"redhat-operators-5zq82\" (UID: \"bba9702f-9e04-46d4-9a98-92d5303383c4\") " pod="openshift-marketplace/redhat-operators-5zq82" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.499276 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bba9702f-9e04-46d4-9a98-92d5303383c4-utilities\") pod \"redhat-operators-5zq82\" (UID: \"bba9702f-9e04-46d4-9a98-92d5303383c4\") " pod="openshift-marketplace/redhat-operators-5zq82" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.500870 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bba9702f-9e04-46d4-9a98-92d5303383c4-catalog-content\") pod \"redhat-operators-5zq82\" (UID: \"bba9702f-9e04-46d4-9a98-92d5303383c4\") " pod="openshift-marketplace/redhat-operators-5zq82" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.526936 
5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-725hc\" (UniqueName: \"kubernetes.io/projected/bba9702f-9e04-46d4-9a98-92d5303383c4-kube-api-access-725hc\") pod \"redhat-operators-5zq82\" (UID: \"bba9702f-9e04-46d4-9a98-92d5303383c4\") " pod="openshift-marketplace/redhat-operators-5zq82" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.588462 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-pnbpp" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.600407 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed2fa586-d060-42df-9ef2-163f0dfa7c96-proxy-ca-bundles\") pod \"controller-manager-686c8df7df-28k2q\" (UID: \"ed2fa586-d060-42df-9ef2-163f0dfa7c96\") " pod="openshift-controller-manager/controller-manager-686c8df7df-28k2q" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.600492 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed2fa586-d060-42df-9ef2-163f0dfa7c96-client-ca\") pod \"controller-manager-686c8df7df-28k2q\" (UID: \"ed2fa586-d060-42df-9ef2-163f0dfa7c96\") " pod="openshift-controller-manager/controller-manager-686c8df7df-28k2q" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.600530 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed2fa586-d060-42df-9ef2-163f0dfa7c96-serving-cert\") pod \"controller-manager-686c8df7df-28k2q\" (UID: \"ed2fa586-d060-42df-9ef2-163f0dfa7c96\") " pod="openshift-controller-manager/controller-manager-686c8df7df-28k2q" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.600550 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/ed2fa586-d060-42df-9ef2-163f0dfa7c96-config\") pod \"controller-manager-686c8df7df-28k2q\" (UID: \"ed2fa586-d060-42df-9ef2-163f0dfa7c96\") " pod="openshift-controller-manager/controller-manager-686c8df7df-28k2q" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.600587 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5twq\" (UniqueName: \"kubernetes.io/projected/ed2fa586-d060-42df-9ef2-163f0dfa7c96-kube-api-access-t5twq\") pod \"controller-manager-686c8df7df-28k2q\" (UID: \"ed2fa586-d060-42df-9ef2-163f0dfa7c96\") " pod="openshift-controller-manager/controller-manager-686c8df7df-28k2q" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.602087 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed2fa586-d060-42df-9ef2-163f0dfa7c96-config\") pod \"controller-manager-686c8df7df-28k2q\" (UID: \"ed2fa586-d060-42df-9ef2-163f0dfa7c96\") " pod="openshift-controller-manager/controller-manager-686c8df7df-28k2q" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.602835 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed2fa586-d060-42df-9ef2-163f0dfa7c96-client-ca\") pod \"controller-manager-686c8df7df-28k2q\" (UID: \"ed2fa586-d060-42df-9ef2-163f0dfa7c96\") " pod="openshift-controller-manager/controller-manager-686c8df7df-28k2q" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.604550 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed2fa586-d060-42df-9ef2-163f0dfa7c96-proxy-ca-bundles\") pod \"controller-manager-686c8df7df-28k2q\" (UID: \"ed2fa586-d060-42df-9ef2-163f0dfa7c96\") " pod="openshift-controller-manager/controller-manager-686c8df7df-28k2q" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.611257 5014 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed2fa586-d060-42df-9ef2-163f0dfa7c96-serving-cert\") pod \"controller-manager-686c8df7df-28k2q\" (UID: \"ed2fa586-d060-42df-9ef2-163f0dfa7c96\") " pod="openshift-controller-manager/controller-manager-686c8df7df-28k2q" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.634376 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5twq\" (UniqueName: \"kubernetes.io/projected/ed2fa586-d060-42df-9ef2-163f0dfa7c96-kube-api-access-t5twq\") pod \"controller-manager-686c8df7df-28k2q\" (UID: \"ed2fa586-d060-42df-9ef2-163f0dfa7c96\") " pod="openshift-controller-manager/controller-manager-686c8df7df-28k2q" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.670449 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5zq82" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.738522 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hx7qb"] Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.739772 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hx7qb" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.744195 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-686c8df7df-28k2q" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.777967 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hx7qb"] Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.803115 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4-catalog-content\") pod \"redhat-operators-hx7qb\" (UID: \"3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4\") " pod="openshift-marketplace/redhat-operators-hx7qb" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.803524 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q5qx\" (UniqueName: \"kubernetes.io/projected/3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4-kube-api-access-6q5qx\") pod \"redhat-operators-hx7qb\" (UID: \"3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4\") " pod="openshift-marketplace/redhat-operators-hx7qb" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.803575 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4-utilities\") pod \"redhat-operators-hx7qb\" (UID: \"3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4\") " pod="openshift-marketplace/redhat-operators-hx7qb" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.904826 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4-catalog-content\") pod \"redhat-operators-hx7qb\" (UID: \"3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4\") " pod="openshift-marketplace/redhat-operators-hx7qb" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.904931 5014 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6q5qx\" (UniqueName: \"kubernetes.io/projected/3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4-kube-api-access-6q5qx\") pod \"redhat-operators-hx7qb\" (UID: \"3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4\") " pod="openshift-marketplace/redhat-operators-hx7qb" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.904994 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4-utilities\") pod \"redhat-operators-hx7qb\" (UID: \"3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4\") " pod="openshift-marketplace/redhat-operators-hx7qb" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.905510 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4-utilities\") pod \"redhat-operators-hx7qb\" (UID: \"3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4\") " pod="openshift-marketplace/redhat-operators-hx7qb" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.905756 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4-catalog-content\") pod \"redhat-operators-hx7qb\" (UID: \"3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4\") " pod="openshift-marketplace/redhat-operators-hx7qb" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.927871 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-h4z55" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.932858 5014 patch_prober.go:28] interesting pod/router-default-5444994796-h4z55 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 28 04:37:28 crc kubenswrapper[5014]: [-]has-synced 
failed: reason withheld Feb 28 04:37:28 crc kubenswrapper[5014]: [+]process-running ok Feb 28 04:37:28 crc kubenswrapper[5014]: healthz check failed Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.932925 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h4z55" podUID="6d15d34d-14e0-4eb1-a442-d7e9ec65f8d3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.951405 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6q5qx\" (UniqueName: \"kubernetes.io/projected/3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4-kube-api-access-6q5qx\") pod \"redhat-operators-hx7qb\" (UID: \"3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4\") " pod="openshift-marketplace/redhat-operators-hx7qb" Feb 28 04:37:28 crc kubenswrapper[5014]: I0228 04:37:28.962229 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-wxczw" Feb 28 04:37:29 crc kubenswrapper[5014]: I0228 04:37:29.064332 5014 generic.go:334] "Generic (PLEG): container finished" podID="6c95e257-4b60-4a76-9849-7c5daa13b539" containerID="3f289dec2939baa6e393a280533fb8e34bb070f1fe990d73657cd1496889f9f5" exitCode=0 Feb 28 04:37:29 crc kubenswrapper[5014]: I0228 04:37:29.064548 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"6c95e257-4b60-4a76-9849-7c5daa13b539","Type":"ContainerDied","Data":"3f289dec2939baa6e393a280533fb8e34bb070f1fe990d73657cd1496889f9f5"} Feb 28 04:37:29 crc kubenswrapper[5014]: I0228 04:37:29.074712 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hx7qb" Feb 28 04:37:29 crc kubenswrapper[5014]: I0228 04:37:29.100038 5014 generic.go:334] "Generic (PLEG): container finished" podID="8a00f74f-e858-42cc-b882-492afd45684d" containerID="78a1f9fe3660c5e4df91192578063f1d3478cf19cd86a2a827284a8bb11b40fe" exitCode=0 Feb 28 04:37:29 crc kubenswrapper[5014]: I0228 04:37:29.100119 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npdf6" event={"ID":"8a00f74f-e858-42cc-b882-492afd45684d","Type":"ContainerDied","Data":"78a1f9fe3660c5e4df91192578063f1d3478cf19cd86a2a827284a8bb11b40fe"} Feb 28 04:37:29 crc kubenswrapper[5014]: I0228 04:37:29.100146 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npdf6" event={"ID":"8a00f74f-e858-42cc-b882-492afd45684d","Type":"ContainerStarted","Data":"7d89f646c6b2e1506360c194ff40f4142b8bf5415f10b823f60d7897fb86e1c4"} Feb 28 04:37:29 crc kubenswrapper[5014]: I0228 04:37:29.116908 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dccb57d9c-p9k9z" event={"ID":"0b36d7a0-d5d2-460c-a0a6-03b165ae5740","Type":"ContainerStarted","Data":"c6d3635dfb45b1ca176d5efd1b5f8a6c28f1a128ee1adf2e98f455b8d1293c5d"} Feb 28 04:37:29 crc kubenswrapper[5014]: I0228 04:37:29.116970 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dccb57d9c-p9k9z" event={"ID":"0b36d7a0-d5d2-460c-a0a6-03b165ae5740","Type":"ContainerStarted","Data":"7dcd765f9c3c77ae21fe13b8c6346f40d25742483055a91a6d107e58e2d5ad1d"} Feb 28 04:37:29 crc kubenswrapper[5014]: I0228 04:37:29.118186 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5dccb57d9c-p9k9z" Feb 28 04:37:29 crc kubenswrapper[5014]: I0228 04:37:29.158487 5014 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5dccb57d9c-p9k9z" Feb 28 04:37:29 crc kubenswrapper[5014]: I0228 04:37:29.176135 5014 generic.go:334] "Generic (PLEG): container finished" podID="d19bc223-12d6-45a9-87de-31ec3b6d9557" containerID="c41e25e52b50b03c3a060af4363a93cb1b4e95573b3ffe1924132c052ecc75d7" exitCode=0 Feb 28 04:37:29 crc kubenswrapper[5014]: I0228 04:37:29.176684 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-64ktb" event={"ID":"d19bc223-12d6-45a9-87de-31ec3b6d9557","Type":"ContainerDied","Data":"c41e25e52b50b03c3a060af4363a93cb1b4e95573b3ffe1924132c052ecc75d7"} Feb 28 04:37:29 crc kubenswrapper[5014]: I0228 04:37:29.176775 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-64ktb" event={"ID":"d19bc223-12d6-45a9-87de-31ec3b6d9557","Type":"ContainerStarted","Data":"e5e5e684aa05b3ad8ec2cdb411d01943990413cbcc0f061f1bc1e7656e4f53cd"} Feb 28 04:37:29 crc kubenswrapper[5014]: I0228 04:37:29.186206 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5dccb57d9c-p9k9z" podStartSLOduration=3.186172616 podStartE2EDuration="3.186172616s" podCreationTimestamp="2026-02-28 04:37:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:29.159053507 +0000 UTC m=+237.829179437" watchObservedRunningTime="2026-02-28 04:37:29.186172616 +0000 UTC m=+237.856298526" Feb 28 04:37:29 crc kubenswrapper[5014]: I0228 04:37:29.331902 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-686c8df7df-28k2q"] Feb 28 04:37:29 crc kubenswrapper[5014]: I0228 04:37:29.454385 5014 ???:1] "http: TLS handshake error from 192.168.126.11:47460: no serving certificate available for the kubelet" Feb 28 
04:37:29 crc kubenswrapper[5014]: I0228 04:37:29.466223 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 28 04:37:29 crc kubenswrapper[5014]: I0228 04:37:29.477988 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 04:37:29 crc kubenswrapper[5014]: I0228 04:37:29.485789 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 28 04:37:29 crc kubenswrapper[5014]: I0228 04:37:29.486033 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 28 04:37:29 crc kubenswrapper[5014]: I0228 04:37:29.500595 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 28 04:37:29 crc kubenswrapper[5014]: I0228 04:37:29.522325 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5zq82"] Feb 28 04:37:29 crc kubenswrapper[5014]: I0228 04:37:29.546569 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cdd88c1d-029f-41d1-8c81-f160bb086a17-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cdd88c1d-029f-41d1-8c81-f160bb086a17\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 04:37:29 crc kubenswrapper[5014]: I0228 04:37:29.546634 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cdd88c1d-029f-41d1-8c81-f160bb086a17-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cdd88c1d-029f-41d1-8c81-f160bb086a17\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 04:37:29 crc kubenswrapper[5014]: I0228 04:37:29.648608 5014 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cdd88c1d-029f-41d1-8c81-f160bb086a17-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cdd88c1d-029f-41d1-8c81-f160bb086a17\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 04:37:29 crc kubenswrapper[5014]: I0228 04:37:29.649092 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cdd88c1d-029f-41d1-8c81-f160bb086a17-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cdd88c1d-029f-41d1-8c81-f160bb086a17\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 04:37:29 crc kubenswrapper[5014]: I0228 04:37:29.648789 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cdd88c1d-029f-41d1-8c81-f160bb086a17-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cdd88c1d-029f-41d1-8c81-f160bb086a17\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 04:37:29 crc kubenswrapper[5014]: I0228 04:37:29.682976 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cdd88c1d-029f-41d1-8c81-f160bb086a17-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cdd88c1d-029f-41d1-8c81-f160bb086a17\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 04:37:29 crc kubenswrapper[5014]: I0228 04:37:29.804309 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 04:37:29 crc kubenswrapper[5014]: I0228 04:37:29.943155 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-h4z55" Feb 28 04:37:29 crc kubenswrapper[5014]: I0228 04:37:29.953333 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-h4z55" Feb 28 04:37:29 crc kubenswrapper[5014]: I0228 04:37:29.996779 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hx7qb"] Feb 28 04:37:30 crc kubenswrapper[5014]: I0228 04:37:30.231905 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hx7qb" event={"ID":"3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4","Type":"ContainerStarted","Data":"65c0f0a2899c1b046011e4457c5aa3352a06e206a9a8a12b5b8f747cff62ba73"} Feb 28 04:37:30 crc kubenswrapper[5014]: I0228 04:37:30.238282 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zq82" event={"ID":"bba9702f-9e04-46d4-9a98-92d5303383c4","Type":"ContainerStarted","Data":"33d54e0fd535d7b84533ec8eaf3fd608cf7cf5a85c89fdb6273a18b0308dab02"} Feb 28 04:37:30 crc kubenswrapper[5014]: I0228 04:37:30.248217 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-686c8df7df-28k2q" event={"ID":"ed2fa586-d060-42df-9ef2-163f0dfa7c96","Type":"ContainerStarted","Data":"5581f74c2e97579db05acee5d15062efb3164bb2ef1e1652a7eb32cdab02ab0f"} Feb 28 04:37:30 crc kubenswrapper[5014]: I0228 04:37:30.248270 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-686c8df7df-28k2q" event={"ID":"ed2fa586-d060-42df-9ef2-163f0dfa7c96","Type":"ContainerStarted","Data":"3541dae7e66bd8fb3bc307c7fc362f519fb8bec9f9ed00ecc749c856e36b44a7"} Feb 28 04:37:30 crc kubenswrapper[5014]: 
I0228 04:37:30.249819 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-686c8df7df-28k2q" Feb 28 04:37:30 crc kubenswrapper[5014]: I0228 04:37:30.258673 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-686c8df7df-28k2q" Feb 28 04:37:30 crc kubenswrapper[5014]: I0228 04:37:30.336610 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-686c8df7df-28k2q" podStartSLOduration=4.336583343 podStartE2EDuration="4.336583343s" podCreationTimestamp="2026-02-28 04:37:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:30.275036706 +0000 UTC m=+238.945162616" watchObservedRunningTime="2026-02-28 04:37:30.336583343 +0000 UTC m=+239.006709253" Feb 28 04:37:30 crc kubenswrapper[5014]: I0228 04:37:30.511910 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 28 04:37:30 crc kubenswrapper[5014]: I0228 04:37:30.816866 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 04:37:30 crc kubenswrapper[5014]: I0228 04:37:30.895078 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c95e257-4b60-4a76-9849-7c5daa13b539-kube-api-access\") pod \"6c95e257-4b60-4a76-9849-7c5daa13b539\" (UID: \"6c95e257-4b60-4a76-9849-7c5daa13b539\") " Feb 28 04:37:30 crc kubenswrapper[5014]: I0228 04:37:30.895355 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c95e257-4b60-4a76-9849-7c5daa13b539-kubelet-dir\") pod \"6c95e257-4b60-4a76-9849-7c5daa13b539\" (UID: \"6c95e257-4b60-4a76-9849-7c5daa13b539\") " Feb 28 04:37:30 crc kubenswrapper[5014]: I0228 04:37:30.895778 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c95e257-4b60-4a76-9849-7c5daa13b539-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6c95e257-4b60-4a76-9849-7c5daa13b539" (UID: "6c95e257-4b60-4a76-9849-7c5daa13b539"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:37:30 crc kubenswrapper[5014]: I0228 04:37:30.918561 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c95e257-4b60-4a76-9849-7c5daa13b539-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6c95e257-4b60-4a76-9849-7c5daa13b539" (UID: "6c95e257-4b60-4a76-9849-7c5daa13b539"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:37:31 crc kubenswrapper[5014]: I0228 04:37:31.001045 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c95e257-4b60-4a76-9849-7c5daa13b539-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 28 04:37:31 crc kubenswrapper[5014]: I0228 04:37:31.001092 5014 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c95e257-4b60-4a76-9849-7c5daa13b539-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 28 04:37:31 crc kubenswrapper[5014]: I0228 04:37:31.105542 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-dpwrd" Feb 28 04:37:31 crc kubenswrapper[5014]: I0228 04:37:31.290209 5014 generic.go:334] "Generic (PLEG): container finished" podID="bba9702f-9e04-46d4-9a98-92d5303383c4" containerID="a5c70c1addd5fd7d86bdc3ae5cecdcff87614886c9d9a3aff217b654a05fa6a9" exitCode=0 Feb 28 04:37:31 crc kubenswrapper[5014]: I0228 04:37:31.290308 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zq82" event={"ID":"bba9702f-9e04-46d4-9a98-92d5303383c4","Type":"ContainerDied","Data":"a5c70c1addd5fd7d86bdc3ae5cecdcff87614886c9d9a3aff217b654a05fa6a9"} Feb 28 04:37:31 crc kubenswrapper[5014]: I0228 04:37:31.300963 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"6c95e257-4b60-4a76-9849-7c5daa13b539","Type":"ContainerDied","Data":"e807e24de1c37e73d79f713635444760b45c46446f1a6e929fcce6188331ff98"} Feb 28 04:37:31 crc kubenswrapper[5014]: I0228 04:37:31.301006 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e807e24de1c37e73d79f713635444760b45c46446f1a6e929fcce6188331ff98" Feb 28 04:37:31 crc kubenswrapper[5014]: I0228 04:37:31.301089 5014 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 28 04:37:31 crc kubenswrapper[5014]: I0228 04:37:31.306759 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cdd88c1d-029f-41d1-8c81-f160bb086a17","Type":"ContainerStarted","Data":"da4897343e03540ccaea6c390a6c57fc8c864a7070c2da1d1fbcd132ac97c78f"} Feb 28 04:37:31 crc kubenswrapper[5014]: I0228 04:37:31.322124 5014 generic.go:334] "Generic (PLEG): container finished" podID="3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4" containerID="84a3abd3ab4bb703e7973b0142c129a4022fe7222a1d73ed97112ff58f993a49" exitCode=0 Feb 28 04:37:31 crc kubenswrapper[5014]: I0228 04:37:31.325538 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hx7qb" event={"ID":"3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4","Type":"ContainerDied","Data":"84a3abd3ab4bb703e7973b0142c129a4022fe7222a1d73ed97112ff58f993a49"} Feb 28 04:37:32 crc kubenswrapper[5014]: I0228 04:37:32.371058 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cdd88c1d-029f-41d1-8c81-f160bb086a17","Type":"ContainerStarted","Data":"fdb53f6eca32c5a68ca6d4dbfbfd909b7e5bb182de7d2f83c3b2487843119b9c"} Feb 28 04:37:32 crc kubenswrapper[5014]: I0228 04:37:32.398229 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.398182425 podStartE2EDuration="3.398182425s" podCreationTimestamp="2026-02-28 04:37:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:37:32.392079392 +0000 UTC m=+241.062205322" watchObservedRunningTime="2026-02-28 04:37:32.398182425 +0000 UTC m=+241.068308335" Feb 28 04:37:33 crc kubenswrapper[5014]: I0228 04:37:33.376029 5014 generic.go:334] "Generic (PLEG): 
container finished" podID="cdd88c1d-029f-41d1-8c81-f160bb086a17" containerID="fdb53f6eca32c5a68ca6d4dbfbfd909b7e5bb182de7d2f83c3b2487843119b9c" exitCode=0 Feb 28 04:37:33 crc kubenswrapper[5014]: I0228 04:37:33.376588 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cdd88c1d-029f-41d1-8c81-f160bb086a17","Type":"ContainerDied","Data":"fdb53f6eca32c5a68ca6d4dbfbfd909b7e5bb182de7d2f83c3b2487843119b9c"} Feb 28 04:37:33 crc kubenswrapper[5014]: I0228 04:37:33.625444 5014 ???:1] "http: TLS handshake error from 192.168.126.11:47476: no serving certificate available for the kubelet" Feb 28 04:37:34 crc kubenswrapper[5014]: I0228 04:37:34.637611 5014 ???:1] "http: TLS handshake error from 192.168.126.11:47478: no serving certificate available for the kubelet" Feb 28 04:37:34 crc kubenswrapper[5014]: I0228 04:37:34.916599 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 04:37:35 crc kubenswrapper[5014]: I0228 04:37:35.098246 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cdd88c1d-029f-41d1-8c81-f160bb086a17-kubelet-dir\") pod \"cdd88c1d-029f-41d1-8c81-f160bb086a17\" (UID: \"cdd88c1d-029f-41d1-8c81-f160bb086a17\") " Feb 28 04:37:35 crc kubenswrapper[5014]: I0228 04:37:35.098374 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdd88c1d-029f-41d1-8c81-f160bb086a17-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cdd88c1d-029f-41d1-8c81-f160bb086a17" (UID: "cdd88c1d-029f-41d1-8c81-f160bb086a17"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:37:35 crc kubenswrapper[5014]: I0228 04:37:35.098420 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cdd88c1d-029f-41d1-8c81-f160bb086a17-kube-api-access\") pod \"cdd88c1d-029f-41d1-8c81-f160bb086a17\" (UID: \"cdd88c1d-029f-41d1-8c81-f160bb086a17\") " Feb 28 04:37:35 crc kubenswrapper[5014]: I0228 04:37:35.099414 5014 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cdd88c1d-029f-41d1-8c81-f160bb086a17-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 28 04:37:35 crc kubenswrapper[5014]: I0228 04:37:35.108150 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdd88c1d-029f-41d1-8c81-f160bb086a17-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cdd88c1d-029f-41d1-8c81-f160bb086a17" (UID: "cdd88c1d-029f-41d1-8c81-f160bb086a17"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:37:35 crc kubenswrapper[5014]: I0228 04:37:35.201221 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cdd88c1d-029f-41d1-8c81-f160bb086a17-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 28 04:37:35 crc kubenswrapper[5014]: I0228 04:37:35.407384 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cdd88c1d-029f-41d1-8c81-f160bb086a17","Type":"ContainerDied","Data":"da4897343e03540ccaea6c390a6c57fc8c864a7070c2da1d1fbcd132ac97c78f"} Feb 28 04:37:35 crc kubenswrapper[5014]: I0228 04:37:35.407740 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da4897343e03540ccaea6c390a6c57fc8c864a7070c2da1d1fbcd132ac97c78f" Feb 28 04:37:35 crc kubenswrapper[5014]: I0228 04:37:35.407702 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 28 04:37:36 crc kubenswrapper[5014]: I0228 04:37:36.523251 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2258094-df28-401d-aa20-0931bedcb66b-metrics-certs\") pod \"network-metrics-daemon-rqllg\" (UID: \"a2258094-df28-401d-aa20-0931bedcb66b\") " pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:37:36 crc kubenswrapper[5014]: I0228 04:37:36.525414 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 28 04:37:36 crc kubenswrapper[5014]: I0228 04:37:36.553002 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a2258094-df28-401d-aa20-0931bedcb66b-metrics-certs\") pod \"network-metrics-daemon-rqllg\" (UID: \"a2258094-df28-401d-aa20-0931bedcb66b\") " 
pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:37:36 crc kubenswrapper[5014]: I0228 04:37:36.596720 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 28 04:37:36 crc kubenswrapper[5014]: I0228 04:37:36.605558 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-rqllg" Feb 28 04:37:37 crc kubenswrapper[5014]: I0228 04:37:37.842677 5014 patch_prober.go:28] interesting pod/downloads-7954f5f757-cghpw container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 28 04:37:37 crc kubenswrapper[5014]: I0228 04:37:37.843120 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-cghpw" podUID="9f80824d-7fc7-44e3-982c-2856a99523be" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 28 04:37:37 crc kubenswrapper[5014]: I0228 04:37:37.843044 5014 patch_prober.go:28] interesting pod/downloads-7954f5f757-cghpw container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" start-of-body= Feb 28 04:37:37 crc kubenswrapper[5014]: I0228 04:37:37.843240 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cghpw" podUID="9f80824d-7fc7-44e3-982c-2856a99523be" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.14:8080/\": dial tcp 10.217.0.14:8080: connect: connection refused" Feb 28 04:37:38 crc kubenswrapper[5014]: I0228 04:37:38.292283 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-console/console-f9d7485db-n8xpb" Feb 28 04:37:38 crc kubenswrapper[5014]: I0228 04:37:38.296341 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-n8xpb" Feb 28 04:37:45 crc kubenswrapper[5014]: I0228 04:37:45.104643 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-686c8df7df-28k2q"] Feb 28 04:37:45 crc kubenswrapper[5014]: I0228 04:37:45.105372 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-686c8df7df-28k2q" podUID="ed2fa586-d060-42df-9ef2-163f0dfa7c96" containerName="controller-manager" containerID="cri-o://5581f74c2e97579db05acee5d15062efb3164bb2ef1e1652a7eb32cdab02ab0f" gracePeriod=30 Feb 28 04:37:45 crc kubenswrapper[5014]: I0228 04:37:45.117748 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dccb57d9c-p9k9z"] Feb 28 04:37:45 crc kubenswrapper[5014]: I0228 04:37:45.118073 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5dccb57d9c-p9k9z" podUID="0b36d7a0-d5d2-460c-a0a6-03b165ae5740" containerName="route-controller-manager" containerID="cri-o://c6d3635dfb45b1ca176d5efd1b5f8a6c28f1a128ee1adf2e98f455b8d1293c5d" gracePeriod=30 Feb 28 04:37:45 crc kubenswrapper[5014]: I0228 04:37:45.706945 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:37:45 crc kubenswrapper[5014]: I0228 04:37:45.707054 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" 
podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:37:47 crc kubenswrapper[5014]: I0228 04:37:47.013239 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:37:47 crc kubenswrapper[5014]: I0228 04:37:47.211408 5014 patch_prober.go:28] interesting pod/route-controller-manager-5dccb57d9c-p9k9z container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: connect: connection refused" start-of-body= Feb 28 04:37:47 crc kubenswrapper[5014]: I0228 04:37:47.211515 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5dccb57d9c-p9k9z" podUID="0b36d7a0-d5d2-460c-a0a6-03b165ae5740" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: connect: connection refused" Feb 28 04:37:47 crc kubenswrapper[5014]: I0228 04:37:47.886197 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-cghpw" Feb 28 04:37:48 crc kubenswrapper[5014]: I0228 04:37:48.543115 5014 generic.go:334] "Generic (PLEG): container finished" podID="0b36d7a0-d5d2-460c-a0a6-03b165ae5740" containerID="c6d3635dfb45b1ca176d5efd1b5f8a6c28f1a128ee1adf2e98f455b8d1293c5d" exitCode=0 Feb 28 04:37:48 crc kubenswrapper[5014]: I0228 04:37:48.543188 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dccb57d9c-p9k9z" event={"ID":"0b36d7a0-d5d2-460c-a0a6-03b165ae5740","Type":"ContainerDied","Data":"c6d3635dfb45b1ca176d5efd1b5f8a6c28f1a128ee1adf2e98f455b8d1293c5d"} 
Feb 28 04:37:49 crc kubenswrapper[5014]: I0228 04:37:49.746104 5014 patch_prober.go:28] interesting pod/controller-manager-686c8df7df-28k2q container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 28 04:37:49 crc kubenswrapper[5014]: I0228 04:37:49.746234 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-686c8df7df-28k2q" podUID="ed2fa586-d060-42df-9ef2-163f0dfa7c96" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 28 04:37:55 crc kubenswrapper[5014]: I0228 04:37:55.149245 5014 ???:1] "http: TLS handshake error from 192.168.126.11:56248: no serving certificate available for the kubelet" Feb 28 04:37:57 crc kubenswrapper[5014]: I0228 04:37:57.211663 5014 patch_prober.go:28] interesting pod/route-controller-manager-5dccb57d9c-p9k9z container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: connect: connection refused" start-of-body= Feb 28 04:37:57 crc kubenswrapper[5014]: I0228 04:37:57.212360 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5dccb57d9c-p9k9z" podUID="0b36d7a0-d5d2-460c-a0a6-03b165ae5740" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: connect: connection refused" Feb 28 04:37:58 crc kubenswrapper[5014]: I0228 04:37:58.321204 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-q5llk" Feb 28 04:37:59 crc kubenswrapper[5014]: I0228 04:37:59.745556 5014 patch_prober.go:28] interesting pod/controller-manager-686c8df7df-28k2q container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 28 04:37:59 crc kubenswrapper[5014]: I0228 04:37:59.745648 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-686c8df7df-28k2q" podUID="ed2fa586-d060-42df-9ef2-163f0dfa7c96" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 28 04:38:00 crc kubenswrapper[5014]: I0228 04:38:00.146995 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537558-4hwb5"] Feb 28 04:38:00 crc kubenswrapper[5014]: E0228 04:38:00.147411 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c95e257-4b60-4a76-9849-7c5daa13b539" containerName="pruner" Feb 28 04:38:00 crc kubenswrapper[5014]: I0228 04:38:00.147443 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c95e257-4b60-4a76-9849-7c5daa13b539" containerName="pruner" Feb 28 04:38:00 crc kubenswrapper[5014]: E0228 04:38:00.147470 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdd88c1d-029f-41d1-8c81-f160bb086a17" containerName="pruner" Feb 28 04:38:00 crc kubenswrapper[5014]: I0228 04:38:00.147489 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdd88c1d-029f-41d1-8c81-f160bb086a17" containerName="pruner" Feb 28 04:38:00 crc kubenswrapper[5014]: I0228 04:38:00.147680 5014 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="cdd88c1d-029f-41d1-8c81-f160bb086a17" containerName="pruner" Feb 28 04:38:00 crc kubenswrapper[5014]: I0228 04:38:00.147720 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c95e257-4b60-4a76-9849-7c5daa13b539" containerName="pruner" Feb 28 04:38:00 crc kubenswrapper[5014]: I0228 04:38:00.148419 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537558-4hwb5" Feb 28 04:38:00 crc kubenswrapper[5014]: I0228 04:38:00.153080 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-r6pdk" Feb 28 04:38:00 crc kubenswrapper[5014]: I0228 04:38:00.161912 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537558-4hwb5"] Feb 28 04:38:00 crc kubenswrapper[5014]: I0228 04:38:00.232636 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cvst\" (UniqueName: \"kubernetes.io/projected/1943af29-93f0-470e-85e8-4d53409329ae-kube-api-access-6cvst\") pod \"auto-csr-approver-29537558-4hwb5\" (UID: \"1943af29-93f0-470e-85e8-4d53409329ae\") " pod="openshift-infra/auto-csr-approver-29537558-4hwb5" Feb 28 04:38:00 crc kubenswrapper[5014]: I0228 04:38:00.335190 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cvst\" (UniqueName: \"kubernetes.io/projected/1943af29-93f0-470e-85e8-4d53409329ae-kube-api-access-6cvst\") pod \"auto-csr-approver-29537558-4hwb5\" (UID: \"1943af29-93f0-470e-85e8-4d53409329ae\") " pod="openshift-infra/auto-csr-approver-29537558-4hwb5" Feb 28 04:38:00 crc kubenswrapper[5014]: I0228 04:38:00.356685 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cvst\" (UniqueName: \"kubernetes.io/projected/1943af29-93f0-470e-85e8-4d53409329ae-kube-api-access-6cvst\") pod \"auto-csr-approver-29537558-4hwb5\" (UID: 
\"1943af29-93f0-470e-85e8-4d53409329ae\") " pod="openshift-infra/auto-csr-approver-29537558-4hwb5" Feb 28 04:38:00 crc kubenswrapper[5014]: I0228 04:38:00.472250 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537558-4hwb5" Feb 28 04:38:02 crc kubenswrapper[5014]: I0228 04:38:02.664689 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 28 04:38:02 crc kubenswrapper[5014]: I0228 04:38:02.666013 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 04:38:02 crc kubenswrapper[5014]: I0228 04:38:02.671171 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 28 04:38:02 crc kubenswrapper[5014]: I0228 04:38:02.671544 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 28 04:38:02 crc kubenswrapper[5014]: I0228 04:38:02.674648 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 28 04:38:02 crc kubenswrapper[5014]: I0228 04:38:02.689602 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fe2969a7-da7d-4775-85cd-457fa5467c79-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fe2969a7-da7d-4775-85cd-457fa5467c79\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 04:38:02 crc kubenswrapper[5014]: I0228 04:38:02.689701 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fe2969a7-da7d-4775-85cd-457fa5467c79-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fe2969a7-da7d-4775-85cd-457fa5467c79\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 04:38:02 crc kubenswrapper[5014]: I0228 04:38:02.791049 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fe2969a7-da7d-4775-85cd-457fa5467c79-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fe2969a7-da7d-4775-85cd-457fa5467c79\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 04:38:02 crc kubenswrapper[5014]: I0228 04:38:02.791125 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fe2969a7-da7d-4775-85cd-457fa5467c79-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fe2969a7-da7d-4775-85cd-457fa5467c79\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 04:38:02 crc kubenswrapper[5014]: I0228 04:38:02.791324 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fe2969a7-da7d-4775-85cd-457fa5467c79-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"fe2969a7-da7d-4775-85cd-457fa5467c79\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 04:38:02 crc kubenswrapper[5014]: I0228 04:38:02.819376 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fe2969a7-da7d-4775-85cd-457fa5467c79-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"fe2969a7-da7d-4775-85cd-457fa5467c79\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 04:38:03 crc kubenswrapper[5014]: I0228 04:38:03.004645 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 04:38:03 crc kubenswrapper[5014]: I0228 04:38:03.650894 5014 generic.go:334] "Generic (PLEG): container finished" podID="ed2fa586-d060-42df-9ef2-163f0dfa7c96" containerID="5581f74c2e97579db05acee5d15062efb3164bb2ef1e1652a7eb32cdab02ab0f" exitCode=0 Feb 28 04:38:03 crc kubenswrapper[5014]: I0228 04:38:03.651028 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-686c8df7df-28k2q" event={"ID":"ed2fa586-d060-42df-9ef2-163f0dfa7c96","Type":"ContainerDied","Data":"5581f74c2e97579db05acee5d15062efb3164bb2ef1e1652a7eb32cdab02ab0f"} Feb 28 04:38:05 crc kubenswrapper[5014]: I0228 04:38:05.218331 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dccb57d9c-p9k9z" Feb 28 04:38:05 crc kubenswrapper[5014]: I0228 04:38:05.338386 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b36d7a0-d5d2-460c-a0a6-03b165ae5740-serving-cert\") pod \"0b36d7a0-d5d2-460c-a0a6-03b165ae5740\" (UID: \"0b36d7a0-d5d2-460c-a0a6-03b165ae5740\") " Feb 28 04:38:05 crc kubenswrapper[5014]: I0228 04:38:05.338456 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjp82\" (UniqueName: \"kubernetes.io/projected/0b36d7a0-d5d2-460c-a0a6-03b165ae5740-kube-api-access-fjp82\") pod \"0b36d7a0-d5d2-460c-a0a6-03b165ae5740\" (UID: \"0b36d7a0-d5d2-460c-a0a6-03b165ae5740\") " Feb 28 04:38:05 crc kubenswrapper[5014]: I0228 04:38:05.338644 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b36d7a0-d5d2-460c-a0a6-03b165ae5740-config\") pod \"0b36d7a0-d5d2-460c-a0a6-03b165ae5740\" (UID: \"0b36d7a0-d5d2-460c-a0a6-03b165ae5740\") " Feb 28 04:38:05 crc kubenswrapper[5014]: I0228 
04:38:05.338684 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b36d7a0-d5d2-460c-a0a6-03b165ae5740-client-ca\") pod \"0b36d7a0-d5d2-460c-a0a6-03b165ae5740\" (UID: \"0b36d7a0-d5d2-460c-a0a6-03b165ae5740\") " Feb 28 04:38:05 crc kubenswrapper[5014]: I0228 04:38:05.339722 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b36d7a0-d5d2-460c-a0a6-03b165ae5740-client-ca" (OuterVolumeSpecName: "client-ca") pod "0b36d7a0-d5d2-460c-a0a6-03b165ae5740" (UID: "0b36d7a0-d5d2-460c-a0a6-03b165ae5740"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:38:05 crc kubenswrapper[5014]: I0228 04:38:05.339947 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b36d7a0-d5d2-460c-a0a6-03b165ae5740-config" (OuterVolumeSpecName: "config") pod "0b36d7a0-d5d2-460c-a0a6-03b165ae5740" (UID: "0b36d7a0-d5d2-460c-a0a6-03b165ae5740"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:38:05 crc kubenswrapper[5014]: I0228 04:38:05.340328 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b36d7a0-d5d2-460c-a0a6-03b165ae5740-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:38:05 crc kubenswrapper[5014]: I0228 04:38:05.340367 5014 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0b36d7a0-d5d2-460c-a0a6-03b165ae5740-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 04:38:05 crc kubenswrapper[5014]: I0228 04:38:05.345002 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b36d7a0-d5d2-460c-a0a6-03b165ae5740-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b36d7a0-d5d2-460c-a0a6-03b165ae5740" (UID: "0b36d7a0-d5d2-460c-a0a6-03b165ae5740"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:38:05 crc kubenswrapper[5014]: I0228 04:38:05.346069 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b36d7a0-d5d2-460c-a0a6-03b165ae5740-kube-api-access-fjp82" (OuterVolumeSpecName: "kube-api-access-fjp82") pod "0b36d7a0-d5d2-460c-a0a6-03b165ae5740" (UID: "0b36d7a0-d5d2-460c-a0a6-03b165ae5740"). InnerVolumeSpecName "kube-api-access-fjp82". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:38:05 crc kubenswrapper[5014]: I0228 04:38:05.441977 5014 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b36d7a0-d5d2-460c-a0a6-03b165ae5740-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 04:38:05 crc kubenswrapper[5014]: I0228 04:38:05.442066 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjp82\" (UniqueName: \"kubernetes.io/projected/0b36d7a0-d5d2-460c-a0a6-03b165ae5740-kube-api-access-fjp82\") on node \"crc\" DevicePath \"\"" Feb 28 04:38:05 crc kubenswrapper[5014]: I0228 04:38:05.666164 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5dccb57d9c-p9k9z" event={"ID":"0b36d7a0-d5d2-460c-a0a6-03b165ae5740","Type":"ContainerDied","Data":"7dcd765f9c3c77ae21fe13b8c6346f40d25742483055a91a6d107e58e2d5ad1d"} Feb 28 04:38:05 crc kubenswrapper[5014]: I0228 04:38:05.666234 5014 scope.go:117] "RemoveContainer" containerID="c6d3635dfb45b1ca176d5efd1b5f8a6c28f1a128ee1adf2e98f455b8d1293c5d" Feb 28 04:38:05 crc kubenswrapper[5014]: I0228 04:38:05.666259 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5dccb57d9c-p9k9z" Feb 28 04:38:05 crc kubenswrapper[5014]: I0228 04:38:05.702076 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dccb57d9c-p9k9z"] Feb 28 04:38:05 crc kubenswrapper[5014]: I0228 04:38:05.705689 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5dccb57d9c-p9k9z"] Feb 28 04:38:06 crc kubenswrapper[5014]: I0228 04:38:06.179168 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b36d7a0-d5d2-460c-a0a6-03b165ae5740" path="/var/lib/kubelet/pods/0b36d7a0-d5d2-460c-a0a6-03b165ae5740/volumes" Feb 28 04:38:06 crc kubenswrapper[5014]: I0228 04:38:06.438231 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-866776bd7-58b5l"] Feb 28 04:38:06 crc kubenswrapper[5014]: E0228 04:38:06.438675 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b36d7a0-d5d2-460c-a0a6-03b165ae5740" containerName="route-controller-manager" Feb 28 04:38:06 crc kubenswrapper[5014]: I0228 04:38:06.438702 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b36d7a0-d5d2-460c-a0a6-03b165ae5740" containerName="route-controller-manager" Feb 28 04:38:06 crc kubenswrapper[5014]: I0228 04:38:06.438957 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b36d7a0-d5d2-460c-a0a6-03b165ae5740" containerName="route-controller-manager" Feb 28 04:38:06 crc kubenswrapper[5014]: I0228 04:38:06.439710 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-866776bd7-58b5l" Feb 28 04:38:06 crc kubenswrapper[5014]: I0228 04:38:06.442567 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 28 04:38:06 crc kubenswrapper[5014]: I0228 04:38:06.442938 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 28 04:38:06 crc kubenswrapper[5014]: I0228 04:38:06.443543 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 28 04:38:06 crc kubenswrapper[5014]: I0228 04:38:06.445156 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 28 04:38:06 crc kubenswrapper[5014]: I0228 04:38:06.446388 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 28 04:38:06 crc kubenswrapper[5014]: I0228 04:38:06.447767 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 28 04:38:06 crc kubenswrapper[5014]: I0228 04:38:06.455847 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-866776bd7-58b5l"] Feb 28 04:38:06 crc kubenswrapper[5014]: I0228 04:38:06.562937 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2k5sz\" (UniqueName: \"kubernetes.io/projected/3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72-kube-api-access-2k5sz\") pod \"route-controller-manager-866776bd7-58b5l\" (UID: \"3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72\") " pod="openshift-route-controller-manager/route-controller-manager-866776bd7-58b5l" Feb 28 04:38:06 crc kubenswrapper[5014]: I0228 04:38:06.563546 5014 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72-config\") pod \"route-controller-manager-866776bd7-58b5l\" (UID: \"3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72\") " pod="openshift-route-controller-manager/route-controller-manager-866776bd7-58b5l" Feb 28 04:38:06 crc kubenswrapper[5014]: I0228 04:38:06.563691 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72-serving-cert\") pod \"route-controller-manager-866776bd7-58b5l\" (UID: \"3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72\") " pod="openshift-route-controller-manager/route-controller-manager-866776bd7-58b5l" Feb 28 04:38:06 crc kubenswrapper[5014]: I0228 04:38:06.563839 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72-client-ca\") pod \"route-controller-manager-866776bd7-58b5l\" (UID: \"3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72\") " pod="openshift-route-controller-manager/route-controller-manager-866776bd7-58b5l" Feb 28 04:38:06 crc kubenswrapper[5014]: I0228 04:38:06.664782 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72-config\") pod \"route-controller-manager-866776bd7-58b5l\" (UID: \"3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72\") " pod="openshift-route-controller-manager/route-controller-manager-866776bd7-58b5l" Feb 28 04:38:06 crc kubenswrapper[5014]: I0228 04:38:06.665229 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72-serving-cert\") pod \"route-controller-manager-866776bd7-58b5l\" (UID: 
\"3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72\") " pod="openshift-route-controller-manager/route-controller-manager-866776bd7-58b5l" Feb 28 04:38:06 crc kubenswrapper[5014]: I0228 04:38:06.665352 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72-client-ca\") pod \"route-controller-manager-866776bd7-58b5l\" (UID: \"3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72\") " pod="openshift-route-controller-manager/route-controller-manager-866776bd7-58b5l" Feb 28 04:38:06 crc kubenswrapper[5014]: I0228 04:38:06.665495 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2k5sz\" (UniqueName: \"kubernetes.io/projected/3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72-kube-api-access-2k5sz\") pod \"route-controller-manager-866776bd7-58b5l\" (UID: \"3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72\") " pod="openshift-route-controller-manager/route-controller-manager-866776bd7-58b5l" Feb 28 04:38:06 crc kubenswrapper[5014]: I0228 04:38:06.666592 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72-config\") pod \"route-controller-manager-866776bd7-58b5l\" (UID: \"3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72\") " pod="openshift-route-controller-manager/route-controller-manager-866776bd7-58b5l" Feb 28 04:38:06 crc kubenswrapper[5014]: I0228 04:38:06.666718 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72-client-ca\") pod \"route-controller-manager-866776bd7-58b5l\" (UID: \"3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72\") " pod="openshift-route-controller-manager/route-controller-manager-866776bd7-58b5l" Feb 28 04:38:06 crc kubenswrapper[5014]: I0228 04:38:06.677647 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72-serving-cert\") pod \"route-controller-manager-866776bd7-58b5l\" (UID: \"3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72\") " pod="openshift-route-controller-manager/route-controller-manager-866776bd7-58b5l" Feb 28 04:38:06 crc kubenswrapper[5014]: I0228 04:38:06.692227 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2k5sz\" (UniqueName: \"kubernetes.io/projected/3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72-kube-api-access-2k5sz\") pod \"route-controller-manager-866776bd7-58b5l\" (UID: \"3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72\") " pod="openshift-route-controller-manager/route-controller-manager-866776bd7-58b5l" Feb 28 04:38:06 crc kubenswrapper[5014]: I0228 04:38:06.777307 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-866776bd7-58b5l" Feb 28 04:38:07 crc kubenswrapper[5014]: I0228 04:38:07.655078 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 28 04:38:07 crc kubenswrapper[5014]: I0228 04:38:07.662392 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 28 04:38:07 crc kubenswrapper[5014]: I0228 04:38:07.667707 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 28 04:38:07 crc kubenswrapper[5014]: I0228 04:38:07.783011 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f5ec05c-1bc0-41ad-9135-05564f8e3192-kube-api-access\") pod \"installer-9-crc\" (UID: \"8f5ec05c-1bc0-41ad-9135-05564f8e3192\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 28 04:38:07 crc kubenswrapper[5014]: I0228 04:38:07.783250 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f5ec05c-1bc0-41ad-9135-05564f8e3192-kubelet-dir\") pod \"installer-9-crc\" (UID: \"8f5ec05c-1bc0-41ad-9135-05564f8e3192\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 28 04:38:07 crc kubenswrapper[5014]: I0228 04:38:07.783311 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8f5ec05c-1bc0-41ad-9135-05564f8e3192-var-lock\") pod \"installer-9-crc\" (UID: \"8f5ec05c-1bc0-41ad-9135-05564f8e3192\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 28 04:38:07 crc kubenswrapper[5014]: I0228 04:38:07.884626 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f5ec05c-1bc0-41ad-9135-05564f8e3192-kube-api-access\") pod \"installer-9-crc\" (UID: \"8f5ec05c-1bc0-41ad-9135-05564f8e3192\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 28 04:38:07 crc kubenswrapper[5014]: I0228 04:38:07.884769 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/8f5ec05c-1bc0-41ad-9135-05564f8e3192-kubelet-dir\") pod \"installer-9-crc\" (UID: \"8f5ec05c-1bc0-41ad-9135-05564f8e3192\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 28 04:38:07 crc kubenswrapper[5014]: I0228 04:38:07.884794 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8f5ec05c-1bc0-41ad-9135-05564f8e3192-var-lock\") pod \"installer-9-crc\" (UID: \"8f5ec05c-1bc0-41ad-9135-05564f8e3192\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 28 04:38:07 crc kubenswrapper[5014]: I0228 04:38:07.884939 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f5ec05c-1bc0-41ad-9135-05564f8e3192-kubelet-dir\") pod \"installer-9-crc\" (UID: \"8f5ec05c-1bc0-41ad-9135-05564f8e3192\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 28 04:38:07 crc kubenswrapper[5014]: I0228 04:38:07.884997 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8f5ec05c-1bc0-41ad-9135-05564f8e3192-var-lock\") pod \"installer-9-crc\" (UID: \"8f5ec05c-1bc0-41ad-9135-05564f8e3192\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 28 04:38:07 crc kubenswrapper[5014]: I0228 04:38:07.914186 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f5ec05c-1bc0-41ad-9135-05564f8e3192-kube-api-access\") pod \"installer-9-crc\" (UID: \"8f5ec05c-1bc0-41ad-9135-05564f8e3192\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 28 04:38:07 crc kubenswrapper[5014]: I0228 04:38:07.988784 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 28 04:38:09 crc kubenswrapper[5014]: I0228 04:38:09.745221 5014 patch_prober.go:28] interesting pod/controller-manager-686c8df7df-28k2q container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 28 04:38:09 crc kubenswrapper[5014]: I0228 04:38:09.745367 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-686c8df7df-28k2q" podUID="ed2fa586-d060-42df-9ef2-163f0dfa7c96" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.354695 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-686c8df7df-28k2q" Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.410000 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5676c779b9-4hc57"] Feb 28 04:38:13 crc kubenswrapper[5014]: E0228 04:38:13.410326 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed2fa586-d060-42df-9ef2-163f0dfa7c96" containerName="controller-manager" Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.410341 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed2fa586-d060-42df-9ef2-163f0dfa7c96" containerName="controller-manager" Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.410435 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed2fa586-d060-42df-9ef2-163f0dfa7c96" containerName="controller-manager" Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.410987 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5676c779b9-4hc57" Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.412753 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5676c779b9-4hc57"] Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.488411 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed2fa586-d060-42df-9ef2-163f0dfa7c96-serving-cert\") pod \"ed2fa586-d060-42df-9ef2-163f0dfa7c96\" (UID: \"ed2fa586-d060-42df-9ef2-163f0dfa7c96\") " Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.488535 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5twq\" (UniqueName: \"kubernetes.io/projected/ed2fa586-d060-42df-9ef2-163f0dfa7c96-kube-api-access-t5twq\") pod \"ed2fa586-d060-42df-9ef2-163f0dfa7c96\" (UID: \"ed2fa586-d060-42df-9ef2-163f0dfa7c96\") " Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.488591 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed2fa586-d060-42df-9ef2-163f0dfa7c96-proxy-ca-bundles\") pod \"ed2fa586-d060-42df-9ef2-163f0dfa7c96\" (UID: \"ed2fa586-d060-42df-9ef2-163f0dfa7c96\") " Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.488629 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed2fa586-d060-42df-9ef2-163f0dfa7c96-client-ca\") pod \"ed2fa586-d060-42df-9ef2-163f0dfa7c96\" (UID: \"ed2fa586-d060-42df-9ef2-163f0dfa7c96\") " Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.488653 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed2fa586-d060-42df-9ef2-163f0dfa7c96-config\") pod \"ed2fa586-d060-42df-9ef2-163f0dfa7c96\" (UID: 
\"ed2fa586-d060-42df-9ef2-163f0dfa7c96\") " Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.488869 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59b2k\" (UniqueName: \"kubernetes.io/projected/599d6a93-0b00-42e3-9dee-37a3888acf48-kube-api-access-59b2k\") pod \"controller-manager-5676c779b9-4hc57\" (UID: \"599d6a93-0b00-42e3-9dee-37a3888acf48\") " pod="openshift-controller-manager/controller-manager-5676c779b9-4hc57" Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.488969 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/599d6a93-0b00-42e3-9dee-37a3888acf48-proxy-ca-bundles\") pod \"controller-manager-5676c779b9-4hc57\" (UID: \"599d6a93-0b00-42e3-9dee-37a3888acf48\") " pod="openshift-controller-manager/controller-manager-5676c779b9-4hc57" Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.489008 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/599d6a93-0b00-42e3-9dee-37a3888acf48-serving-cert\") pod \"controller-manager-5676c779b9-4hc57\" (UID: \"599d6a93-0b00-42e3-9dee-37a3888acf48\") " pod="openshift-controller-manager/controller-manager-5676c779b9-4hc57" Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.489113 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/599d6a93-0b00-42e3-9dee-37a3888acf48-client-ca\") pod \"controller-manager-5676c779b9-4hc57\" (UID: \"599d6a93-0b00-42e3-9dee-37a3888acf48\") " pod="openshift-controller-manager/controller-manager-5676c779b9-4hc57" Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.489147 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/599d6a93-0b00-42e3-9dee-37a3888acf48-config\") pod \"controller-manager-5676c779b9-4hc57\" (UID: \"599d6a93-0b00-42e3-9dee-37a3888acf48\") " pod="openshift-controller-manager/controller-manager-5676c779b9-4hc57" Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.490033 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed2fa586-d060-42df-9ef2-163f0dfa7c96-config" (OuterVolumeSpecName: "config") pod "ed2fa586-d060-42df-9ef2-163f0dfa7c96" (UID: "ed2fa586-d060-42df-9ef2-163f0dfa7c96"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.494163 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed2fa586-d060-42df-9ef2-163f0dfa7c96-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ed2fa586-d060-42df-9ef2-163f0dfa7c96" (UID: "ed2fa586-d060-42df-9ef2-163f0dfa7c96"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.494160 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ed2fa586-d060-42df-9ef2-163f0dfa7c96-client-ca" (OuterVolumeSpecName: "client-ca") pod "ed2fa586-d060-42df-9ef2-163f0dfa7c96" (UID: "ed2fa586-d060-42df-9ef2-163f0dfa7c96"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.498258 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed2fa586-d060-42df-9ef2-163f0dfa7c96-kube-api-access-t5twq" (OuterVolumeSpecName: "kube-api-access-t5twq") pod "ed2fa586-d060-42df-9ef2-163f0dfa7c96" (UID: "ed2fa586-d060-42df-9ef2-163f0dfa7c96"). InnerVolumeSpecName "kube-api-access-t5twq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.504069 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed2fa586-d060-42df-9ef2-163f0dfa7c96-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ed2fa586-d060-42df-9ef2-163f0dfa7c96" (UID: "ed2fa586-d060-42df-9ef2-163f0dfa7c96"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.590189 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/599d6a93-0b00-42e3-9dee-37a3888acf48-config\") pod \"controller-manager-5676c779b9-4hc57\" (UID: \"599d6a93-0b00-42e3-9dee-37a3888acf48\") " pod="openshift-controller-manager/controller-manager-5676c779b9-4hc57" Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.590276 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59b2k\" (UniqueName: \"kubernetes.io/projected/599d6a93-0b00-42e3-9dee-37a3888acf48-kube-api-access-59b2k\") pod \"controller-manager-5676c779b9-4hc57\" (UID: \"599d6a93-0b00-42e3-9dee-37a3888acf48\") " pod="openshift-controller-manager/controller-manager-5676c779b9-4hc57" Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.590324 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/599d6a93-0b00-42e3-9dee-37a3888acf48-proxy-ca-bundles\") pod \"controller-manager-5676c779b9-4hc57\" (UID: \"599d6a93-0b00-42e3-9dee-37a3888acf48\") " pod="openshift-controller-manager/controller-manager-5676c779b9-4hc57" Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.590376 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/599d6a93-0b00-42e3-9dee-37a3888acf48-serving-cert\") pod \"controller-manager-5676c779b9-4hc57\" (UID: \"599d6a93-0b00-42e3-9dee-37a3888acf48\") " pod="openshift-controller-manager/controller-manager-5676c779b9-4hc57" Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.590582 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/599d6a93-0b00-42e3-9dee-37a3888acf48-client-ca\") pod \"controller-manager-5676c779b9-4hc57\" (UID: \"599d6a93-0b00-42e3-9dee-37a3888acf48\") " pod="openshift-controller-manager/controller-manager-5676c779b9-4hc57" Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.590649 5014 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ed2fa586-d060-42df-9ef2-163f0dfa7c96-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.590672 5014 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ed2fa586-d060-42df-9ef2-163f0dfa7c96-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.590685 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed2fa586-d060-42df-9ef2-163f0dfa7c96-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.590698 5014 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed2fa586-d060-42df-9ef2-163f0dfa7c96-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.590713 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5twq\" (UniqueName: \"kubernetes.io/projected/ed2fa586-d060-42df-9ef2-163f0dfa7c96-kube-api-access-t5twq\") on node \"crc\" DevicePath \"\"" Feb 28 04:38:13 crc 
kubenswrapper[5014]: I0228 04:38:13.591763 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/599d6a93-0b00-42e3-9dee-37a3888acf48-client-ca\") pod \"controller-manager-5676c779b9-4hc57\" (UID: \"599d6a93-0b00-42e3-9dee-37a3888acf48\") " pod="openshift-controller-manager/controller-manager-5676c779b9-4hc57" Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.592067 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/599d6a93-0b00-42e3-9dee-37a3888acf48-proxy-ca-bundles\") pod \"controller-manager-5676c779b9-4hc57\" (UID: \"599d6a93-0b00-42e3-9dee-37a3888acf48\") " pod="openshift-controller-manager/controller-manager-5676c779b9-4hc57" Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.592653 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/599d6a93-0b00-42e3-9dee-37a3888acf48-config\") pod \"controller-manager-5676c779b9-4hc57\" (UID: \"599d6a93-0b00-42e3-9dee-37a3888acf48\") " pod="openshift-controller-manager/controller-manager-5676c779b9-4hc57" Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.598629 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/599d6a93-0b00-42e3-9dee-37a3888acf48-serving-cert\") pod \"controller-manager-5676c779b9-4hc57\" (UID: \"599d6a93-0b00-42e3-9dee-37a3888acf48\") " pod="openshift-controller-manager/controller-manager-5676c779b9-4hc57" Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.608787 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59b2k\" (UniqueName: \"kubernetes.io/projected/599d6a93-0b00-42e3-9dee-37a3888acf48-kube-api-access-59b2k\") pod \"controller-manager-5676c779b9-4hc57\" (UID: \"599d6a93-0b00-42e3-9dee-37a3888acf48\") " 
pod="openshift-controller-manager/controller-manager-5676c779b9-4hc57" Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.731777 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-686c8df7df-28k2q" event={"ID":"ed2fa586-d060-42df-9ef2-163f0dfa7c96","Type":"ContainerDied","Data":"3541dae7e66bd8fb3bc307c7fc362f519fb8bec9f9ed00ecc749c856e36b44a7"} Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.731934 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-686c8df7df-28k2q" Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.739721 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5676c779b9-4hc57" Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.763854 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-686c8df7df-28k2q"] Feb 28 04:38:13 crc kubenswrapper[5014]: I0228 04:38:13.766476 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-686c8df7df-28k2q"] Feb 28 04:38:14 crc kubenswrapper[5014]: I0228 04:38:14.182216 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed2fa586-d060-42df-9ef2-163f0dfa7c96" path="/var/lib/kubelet/pods/ed2fa586-d060-42df-9ef2-163f0dfa7c96/volumes" Feb 28 04:38:14 crc kubenswrapper[5014]: E0228 04:38:14.558486 5014 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 28 04:38:14 crc kubenswrapper[5014]: E0228 04:38:14.558795 5014 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 04:38:14 crc kubenswrapper[5014]: container 
&Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 28 04:38:14 crc kubenswrapper[5014]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hxk7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537556-wwqxk_openshift-infra(d84dec61-f4ef-4e0b-adb1-66694017a156): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled Feb 28 04:38:14 crc kubenswrapper[5014]: > logger="UnhandledError" Feb 28 04:38:14 crc kubenswrapper[5014]: E0228 04:38:14.559977 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29537556-wwqxk" podUID="d84dec61-f4ef-4e0b-adb1-66694017a156" Feb 28 04:38:14 crc kubenswrapper[5014]: E0228 04:38:14.739486 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" 
pod="openshift-infra/auto-csr-approver-29537556-wwqxk" podUID="d84dec61-f4ef-4e0b-adb1-66694017a156" Feb 28 04:38:14 crc kubenswrapper[5014]: I0228 04:38:14.887648 5014 scope.go:117] "RemoveContainer" containerID="5581f74c2e97579db05acee5d15062efb3164bb2ef1e1652a7eb32cdab02ab0f" Feb 28 04:38:15 crc kubenswrapper[5014]: I0228 04:38:15.706721 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:38:15 crc kubenswrapper[5014]: I0228 04:38:15.707355 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:38:15 crc kubenswrapper[5014]: I0228 04:38:15.716971 5014 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cct62" Feb 28 04:38:15 crc kubenswrapper[5014]: I0228 04:38:15.718099 5014 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"40cbccd31c912a9ab6a9e8637016dc40a6dfd40522302f9192f50bd3b860a550"} pod="openshift-machine-config-operator/machine-config-daemon-cct62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 04:38:15 crc kubenswrapper[5014]: I0228 04:38:15.718182 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" 
containerID="cri-o://40cbccd31c912a9ab6a9e8637016dc40a6dfd40522302f9192f50bd3b860a550" gracePeriod=600 Feb 28 04:38:16 crc kubenswrapper[5014]: I0228 04:38:16.754500 5014 generic.go:334] "Generic (PLEG): container finished" podID="6aad0009-d904-48f8-8e30-82205907ece1" containerID="40cbccd31c912a9ab6a9e8637016dc40a6dfd40522302f9192f50bd3b860a550" exitCode=0 Feb 28 04:38:16 crc kubenswrapper[5014]: I0228 04:38:16.754601 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" event={"ID":"6aad0009-d904-48f8-8e30-82205907ece1","Type":"ContainerDied","Data":"40cbccd31c912a9ab6a9e8637016dc40a6dfd40522302f9192f50bd3b860a550"} Feb 28 04:38:18 crc kubenswrapper[5014]: I0228 04:38:18.113750 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537558-4hwb5"] Feb 28 04:38:18 crc kubenswrapper[5014]: I0228 04:38:18.377724 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 28 04:38:18 crc kubenswrapper[5014]: I0228 04:38:18.392673 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-rqllg"] Feb 28 04:38:21 crc kubenswrapper[5014]: E0228 04:38:21.916258 5014 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 28 04:38:21 crc kubenswrapper[5014]: E0228 04:38:21.917392 5014 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-725hc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-5zq82_openshift-marketplace(bba9702f-9e04-46d4-9a98-92d5303383c4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 28 04:38:21 crc kubenswrapper[5014]: E0228 04:38:21.919740 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-5zq82" podUID="bba9702f-9e04-46d4-9a98-92d5303383c4" Feb 28 04:38:23 crc 
kubenswrapper[5014]: E0228 04:38:23.492550 5014 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 28 04:38:23 crc kubenswrapper[5014]: E0228 04:38:23.492794 5014 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nvjgv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-kx627_openshift-marketplace(bd99ec6a-5237-42f9-81ad-bd813d262c6d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 28 04:38:23 crc kubenswrapper[5014]: E0228 04:38:23.493984 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-kx627" podUID="bd99ec6a-5237-42f9-81ad-bd813d262c6d" Feb 28 04:38:25 crc kubenswrapper[5014]: E0228 04:38:25.678562 5014 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 28 04:38:25 crc kubenswrapper[5014]: E0228 04:38:25.679147 5014 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qkdvt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9cznf_openshift-marketplace(52079806-fc0c-4852-8150-0123d376c1b2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 28 04:38:25 crc kubenswrapper[5014]: E0228 04:38:25.680303 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-9cznf" podUID="52079806-fc0c-4852-8150-0123d376c1b2" Feb 28 04:38:25 crc 
kubenswrapper[5014]: E0228 04:38:25.695528 5014 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 28 04:38:25 crc kubenswrapper[5014]: E0228 04:38:25.695681 5014 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vmkg5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-sqfvs_openshift-marketplace(50cf3400-fb73-4038-b616-2d3559aaf784): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 28 04:38:25 crc kubenswrapper[5014]: E0228 04:38:25.696821 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-sqfvs" podUID="50cf3400-fb73-4038-b616-2d3559aaf784" Feb 28 04:38:25 crc kubenswrapper[5014]: E0228 04:38:25.738253 5014 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 28 04:38:25 crc kubenswrapper[5014]: E0228 04:38:25.738550 5014 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6q5qx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-hx7qb_openshift-marketplace(3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 28 04:38:25 crc kubenswrapper[5014]: E0228 04:38:25.739958 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-hx7qb" podUID="3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4" Feb 28 04:38:27 crc 
kubenswrapper[5014]: W0228 04:38:27.505237 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podfe2969a7_da7d_4775_85cd_457fa5467c79.slice/crio-019092007fbe017319c6a778db75f153cae1cdf8d0a46018679cca12f76da34b WatchSource:0}: Error finding container 019092007fbe017319c6a778db75f153cae1cdf8d0a46018679cca12f76da34b: Status 404 returned error can't find the container with id 019092007fbe017319c6a778db75f153cae1cdf8d0a46018679cca12f76da34b Feb 28 04:38:27 crc kubenswrapper[5014]: E0228 04:38:27.510950 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-5zq82" podUID="bba9702f-9e04-46d4-9a98-92d5303383c4" Feb 28 04:38:27 crc kubenswrapper[5014]: E0228 04:38:27.513491 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-sqfvs" podUID="50cf3400-fb73-4038-b616-2d3559aaf784" Feb 28 04:38:27 crc kubenswrapper[5014]: E0228 04:38:27.513544 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-kx627" podUID="bd99ec6a-5237-42f9-81ad-bd813d262c6d" Feb 28 04:38:27 crc kubenswrapper[5014]: E0228 04:38:27.513597 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/community-operators-9cznf" podUID="52079806-fc0c-4852-8150-0123d376c1b2" Feb 28 04:38:27 crc kubenswrapper[5014]: E0228 04:38:27.514608 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-hx7qb" podUID="3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4" Feb 28 04:38:27 crc kubenswrapper[5014]: E0228 04:38:27.525192 5014 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 28 04:38:27 crc kubenswrapper[5014]: E0228 04:38:27.525369 5014 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tr7z5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-npdf6_openshift-marketplace(8a00f74f-e858-42cc-b882-492afd45684d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 28 04:38:27 crc kubenswrapper[5014]: E0228 04:38:27.526783 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-npdf6" podUID="8a00f74f-e858-42cc-b882-492afd45684d" Feb 28 04:38:27 crc 
kubenswrapper[5014]: E0228 04:38:27.562987 5014 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 28 04:38:27 crc kubenswrapper[5014]: E0228 04:38:27.563161 5014 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5kz2l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-64ktb_openshift-marketplace(d19bc223-12d6-45a9-87de-31ec3b6d9557): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 28 04:38:27 crc kubenswrapper[5014]: E0228 04:38:27.564337 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-64ktb" podUID="d19bc223-12d6-45a9-87de-31ec3b6d9557" Feb 28 04:38:27 crc kubenswrapper[5014]: E0228 04:38:27.595360 5014 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 28 04:38:27 crc kubenswrapper[5014]: E0228 04:38:27.595683 5014 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rqtc5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-r5h8g_openshift-marketplace(7bdb5d29-5a4c-4358-a276-58efd08a8655): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 28 04:38:27 crc kubenswrapper[5014]: E0228 04:38:27.600012 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-r5h8g" podUID="7bdb5d29-5a4c-4358-a276-58efd08a8655" Feb 28 04:38:27 crc 
kubenswrapper[5014]: I0228 04:38:27.689630 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-866776bd7-58b5l"] Feb 28 04:38:27 crc kubenswrapper[5014]: W0228 04:38:27.712453 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b91ec29_c4f0_4689_80d7_2d7d8a5f0f72.slice/crio-80df16194d7b85ce76371be160cbc77ffafe6f6f44096f2fb1232bcf9375914f WatchSource:0}: Error finding container 80df16194d7b85ce76371be160cbc77ffafe6f6f44096f2fb1232bcf9375914f: Status 404 returned error can't find the container with id 80df16194d7b85ce76371be160cbc77ffafe6f6f44096f2fb1232bcf9375914f Feb 28 04:38:27 crc kubenswrapper[5014]: I0228 04:38:27.732761 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 28 04:38:27 crc kubenswrapper[5014]: I0228 04:38:27.811233 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"8f5ec05c-1bc0-41ad-9135-05564f8e3192","Type":"ContainerStarted","Data":"974818a21d93f8dda628b92213034001bdbbd91ea3c870f0e8321c8ee30129b1"} Feb 28 04:38:27 crc kubenswrapper[5014]: I0228 04:38:27.814576 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"fe2969a7-da7d-4775-85cd-457fa5467c79","Type":"ContainerStarted","Data":"019092007fbe017319c6a778db75f153cae1cdf8d0a46018679cca12f76da34b"} Feb 28 04:38:27 crc kubenswrapper[5014]: I0228 04:38:27.815840 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-866776bd7-58b5l" event={"ID":"3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72","Type":"ContainerStarted","Data":"80df16194d7b85ce76371be160cbc77ffafe6f6f44096f2fb1232bcf9375914f"} Feb 28 04:38:27 crc kubenswrapper[5014]: I0228 04:38:27.817526 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29537558-4hwb5" event={"ID":"1943af29-93f0-470e-85e8-4d53409329ae","Type":"ContainerStarted","Data":"93325dc6d25a07fef67b85eae974068ea4a89e67825c24d7361c2932d7296107"} Feb 28 04:38:27 crc kubenswrapper[5014]: I0228 04:38:27.827642 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" event={"ID":"6aad0009-d904-48f8-8e30-82205907ece1","Type":"ContainerStarted","Data":"3c2b8713a83a979e30942a4af450ca8224f253d52fbaf4696ad56965a2752095"} Feb 28 04:38:27 crc kubenswrapper[5014]: I0228 04:38:27.835150 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rqllg" event={"ID":"a2258094-df28-401d-aa20-0931bedcb66b","Type":"ContainerStarted","Data":"7577c361d552abaa00c2175377c17d5d16cf9d65f69089364b17118cb44d3105"} Feb 28 04:38:27 crc kubenswrapper[5014]: E0228 04:38:27.836743 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-64ktb" podUID="d19bc223-12d6-45a9-87de-31ec3b6d9557" Feb 28 04:38:27 crc kubenswrapper[5014]: E0228 04:38:27.838242 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-npdf6" podUID="8a00f74f-e858-42cc-b882-492afd45684d" Feb 28 04:38:27 crc kubenswrapper[5014]: E0228 04:38:27.838596 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-r5h8g" 
podUID="7bdb5d29-5a4c-4358-a276-58efd08a8655" Feb 28 04:38:27 crc kubenswrapper[5014]: I0228 04:38:27.986318 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5676c779b9-4hc57"] Feb 28 04:38:28 crc kubenswrapper[5014]: I0228 04:38:28.843624 5014 generic.go:334] "Generic (PLEG): container finished" podID="fe2969a7-da7d-4775-85cd-457fa5467c79" containerID="01f0414450c2f4c88b9b1794c06c559a45653f65299ad51e7f2e9633530ad781" exitCode=0 Feb 28 04:38:28 crc kubenswrapper[5014]: I0228 04:38:28.843717 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"fe2969a7-da7d-4775-85cd-457fa5467c79","Type":"ContainerDied","Data":"01f0414450c2f4c88b9b1794c06c559a45653f65299ad51e7f2e9633530ad781"} Feb 28 04:38:28 crc kubenswrapper[5014]: I0228 04:38:28.846317 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-866776bd7-58b5l" event={"ID":"3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72","Type":"ContainerStarted","Data":"93b29a1d9832cd1eb109b729786306ef964d4dcc0f2077f03511cd4c9ae2d904"} Feb 28 04:38:28 crc kubenswrapper[5014]: I0228 04:38:28.846537 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-866776bd7-58b5l" Feb 28 04:38:28 crc kubenswrapper[5014]: I0228 04:38:28.848907 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5676c779b9-4hc57" event={"ID":"599d6a93-0b00-42e3-9dee-37a3888acf48","Type":"ContainerStarted","Data":"70fa4e9f174d1e63431b5539de4ef8f9af45cb00517c2219093e66ef13723855"} Feb 28 04:38:28 crc kubenswrapper[5014]: I0228 04:38:28.848939 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5676c779b9-4hc57" 
event={"ID":"599d6a93-0b00-42e3-9dee-37a3888acf48","Type":"ContainerStarted","Data":"1c6c6199f17b57f9bede3185181cff746136d12d45ea8998d3b98f4ba56a655d"} Feb 28 04:38:28 crc kubenswrapper[5014]: I0228 04:38:28.849061 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5676c779b9-4hc57" Feb 28 04:38:28 crc kubenswrapper[5014]: I0228 04:38:28.851082 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rqllg" event={"ID":"a2258094-df28-401d-aa20-0931bedcb66b","Type":"ContainerStarted","Data":"e28a9395fa35ff5b1bc39a9ad915cd2a01a3c3ce0f8dd435d3ce931da9069197"} Feb 28 04:38:28 crc kubenswrapper[5014]: I0228 04:38:28.851126 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-rqllg" event={"ID":"a2258094-df28-401d-aa20-0931bedcb66b","Type":"ContainerStarted","Data":"fef3a4ccd0ae331e8fa1ac95aac690c67e8cb0674cf2f3c224edf7657eeef5ae"} Feb 28 04:38:28 crc kubenswrapper[5014]: I0228 04:38:28.852447 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-866776bd7-58b5l" Feb 28 04:38:28 crc kubenswrapper[5014]: I0228 04:38:28.852789 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"8f5ec05c-1bc0-41ad-9135-05564f8e3192","Type":"ContainerStarted","Data":"b43bc55a5e0695fb54bfea9bfecc58aa6544d8b5004904ffba28a49556abd9d2"} Feb 28 04:38:28 crc kubenswrapper[5014]: I0228 04:38:28.853096 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5676c779b9-4hc57" Feb 28 04:38:28 crc kubenswrapper[5014]: I0228 04:38:28.881157 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5676c779b9-4hc57" podStartSLOduration=23.881135959 
podStartE2EDuration="23.881135959s" podCreationTimestamp="2026-02-28 04:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:38:28.879579003 +0000 UTC m=+297.549704913" watchObservedRunningTime="2026-02-28 04:38:28.881135959 +0000 UTC m=+297.551261869" Feb 28 04:38:28 crc kubenswrapper[5014]: I0228 04:38:28.920629 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=21.920594553 podStartE2EDuration="21.920594553s" podCreationTimestamp="2026-02-28 04:38:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:38:28.919533871 +0000 UTC m=+297.589659781" watchObservedRunningTime="2026-02-28 04:38:28.920594553 +0000 UTC m=+297.590720463" Feb 28 04:38:28 crc kubenswrapper[5014]: I0228 04:38:28.924447 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-866776bd7-58b5l" podStartSLOduration=23.924429616 podStartE2EDuration="23.924429616s" podCreationTimestamp="2026-02-28 04:38:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:38:28.906194697 +0000 UTC m=+297.576320617" watchObservedRunningTime="2026-02-28 04:38:28.924429616 +0000 UTC m=+297.594555546" Feb 28 04:38:28 crc kubenswrapper[5014]: I0228 04:38:28.949391 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-rqllg" podStartSLOduration=247.94934438 podStartE2EDuration="4m7.94934438s" podCreationTimestamp="2026-02-28 04:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:38:28.940597853 
+0000 UTC m=+297.610723763" watchObservedRunningTime="2026-02-28 04:38:28.94934438 +0000 UTC m=+297.619470310" Feb 28 04:38:29 crc kubenswrapper[5014]: I0228 04:38:29.352631 5014 csr.go:261] certificate signing request csr-xhnrp is approved, waiting to be issued Feb 28 04:38:29 crc kubenswrapper[5014]: I0228 04:38:29.361564 5014 csr.go:257] certificate signing request csr-xhnrp is issued Feb 28 04:38:29 crc kubenswrapper[5014]: I0228 04:38:29.858536 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537556-wwqxk" event={"ID":"d84dec61-f4ef-4e0b-adb1-66694017a156","Type":"ContainerStarted","Data":"c9871dfb2c0a80b9a516f34a24f0ee67574f66f811ec1a3cc30dd3d8b7578a01"} Feb 28 04:38:29 crc kubenswrapper[5014]: I0228 04:38:29.859716 5014 generic.go:334] "Generic (PLEG): container finished" podID="1943af29-93f0-470e-85e8-4d53409329ae" containerID="b45257578421382e8bcd79d70bbd064942c27c04cece4d5bcd45a77fe67a4811" exitCode=0 Feb 28 04:38:29 crc kubenswrapper[5014]: I0228 04:38:29.859853 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537558-4hwb5" event={"ID":"1943af29-93f0-470e-85e8-4d53409329ae","Type":"ContainerDied","Data":"b45257578421382e8bcd79d70bbd064942c27c04cece4d5bcd45a77fe67a4811"} Feb 28 04:38:29 crc kubenswrapper[5014]: I0228 04:38:29.877220 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537556-wwqxk" podStartSLOduration=80.709856438 podStartE2EDuration="2m29.877197416s" podCreationTimestamp="2026-02-28 04:36:00 +0000 UTC" firstStartedPulling="2026-02-28 04:37:20.372997587 +0000 UTC m=+229.043123497" lastFinishedPulling="2026-02-28 04:38:29.540338565 +0000 UTC m=+298.210464475" observedRunningTime="2026-02-28 04:38:29.872667492 +0000 UTC m=+298.542793402" watchObservedRunningTime="2026-02-28 04:38:29.877197416 +0000 UTC m=+298.547323346" Feb 28 04:38:30 crc kubenswrapper[5014]: I0228 04:38:30.170185 5014 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 04:38:30 crc kubenswrapper[5014]: I0228 04:38:30.289063 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fe2969a7-da7d-4775-85cd-457fa5467c79-kube-api-access\") pod \"fe2969a7-da7d-4775-85cd-457fa5467c79\" (UID: \"fe2969a7-da7d-4775-85cd-457fa5467c79\") " Feb 28 04:38:30 crc kubenswrapper[5014]: I0228 04:38:30.289139 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fe2969a7-da7d-4775-85cd-457fa5467c79-kubelet-dir\") pod \"fe2969a7-da7d-4775-85cd-457fa5467c79\" (UID: \"fe2969a7-da7d-4775-85cd-457fa5467c79\") " Feb 28 04:38:30 crc kubenswrapper[5014]: I0228 04:38:30.289283 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe2969a7-da7d-4775-85cd-457fa5467c79-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fe2969a7-da7d-4775-85cd-457fa5467c79" (UID: "fe2969a7-da7d-4775-85cd-457fa5467c79"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:38:30 crc kubenswrapper[5014]: I0228 04:38:30.289727 5014 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fe2969a7-da7d-4775-85cd-457fa5467c79-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 28 04:38:30 crc kubenswrapper[5014]: I0228 04:38:30.298704 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe2969a7-da7d-4775-85cd-457fa5467c79-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fe2969a7-da7d-4775-85cd-457fa5467c79" (UID: "fe2969a7-da7d-4775-85cd-457fa5467c79"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:38:30 crc kubenswrapper[5014]: I0228 04:38:30.363579 5014 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-20 03:15:33.503236278 +0000 UTC Feb 28 04:38:30 crc kubenswrapper[5014]: I0228 04:38:30.363637 5014 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7078h37m3.13960236s for next certificate rotation Feb 28 04:38:30 crc kubenswrapper[5014]: I0228 04:38:30.391707 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fe2969a7-da7d-4775-85cd-457fa5467c79-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 28 04:38:30 crc kubenswrapper[5014]: I0228 04:38:30.867471 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"fe2969a7-da7d-4775-85cd-457fa5467c79","Type":"ContainerDied","Data":"019092007fbe017319c6a778db75f153cae1cdf8d0a46018679cca12f76da34b"} Feb 28 04:38:30 crc kubenswrapper[5014]: I0228 04:38:30.867540 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="019092007fbe017319c6a778db75f153cae1cdf8d0a46018679cca12f76da34b" Feb 28 04:38:30 crc kubenswrapper[5014]: I0228 04:38:30.867490 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 28 04:38:30 crc kubenswrapper[5014]: I0228 04:38:30.869062 5014 generic.go:334] "Generic (PLEG): container finished" podID="d84dec61-f4ef-4e0b-adb1-66694017a156" containerID="c9871dfb2c0a80b9a516f34a24f0ee67574f66f811ec1a3cc30dd3d8b7578a01" exitCode=0 Feb 28 04:38:30 crc kubenswrapper[5014]: I0228 04:38:30.869169 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537556-wwqxk" event={"ID":"d84dec61-f4ef-4e0b-adb1-66694017a156","Type":"ContainerDied","Data":"c9871dfb2c0a80b9a516f34a24f0ee67574f66f811ec1a3cc30dd3d8b7578a01"} Feb 28 04:38:31 crc kubenswrapper[5014]: I0228 04:38:31.187220 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537558-4hwb5" Feb 28 04:38:31 crc kubenswrapper[5014]: I0228 04:38:31.305333 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cvst\" (UniqueName: \"kubernetes.io/projected/1943af29-93f0-470e-85e8-4d53409329ae-kube-api-access-6cvst\") pod \"1943af29-93f0-470e-85e8-4d53409329ae\" (UID: \"1943af29-93f0-470e-85e8-4d53409329ae\") " Feb 28 04:38:31 crc kubenswrapper[5014]: I0228 04:38:31.311747 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1943af29-93f0-470e-85e8-4d53409329ae-kube-api-access-6cvst" (OuterVolumeSpecName: "kube-api-access-6cvst") pod "1943af29-93f0-470e-85e8-4d53409329ae" (UID: "1943af29-93f0-470e-85e8-4d53409329ae"). InnerVolumeSpecName "kube-api-access-6cvst". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:38:31 crc kubenswrapper[5014]: I0228 04:38:31.364096 5014 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-23 08:17:35.672949253 +0000 UTC Feb 28 04:38:31 crc kubenswrapper[5014]: I0228 04:38:31.364142 5014 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7155h39m4.308809907s for next certificate rotation Feb 28 04:38:31 crc kubenswrapper[5014]: I0228 04:38:31.409695 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cvst\" (UniqueName: \"kubernetes.io/projected/1943af29-93f0-470e-85e8-4d53409329ae-kube-api-access-6cvst\") on node \"crc\" DevicePath \"\"" Feb 28 04:38:31 crc kubenswrapper[5014]: I0228 04:38:31.876737 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537558-4hwb5" event={"ID":"1943af29-93f0-470e-85e8-4d53409329ae","Type":"ContainerDied","Data":"93325dc6d25a07fef67b85eae974068ea4a89e67825c24d7361c2932d7296107"} Feb 28 04:38:31 crc kubenswrapper[5014]: I0228 04:38:31.876796 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93325dc6d25a07fef67b85eae974068ea4a89e67825c24d7361c2932d7296107" Feb 28 04:38:31 crc kubenswrapper[5014]: I0228 04:38:31.878876 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537558-4hwb5" Feb 28 04:38:32 crc kubenswrapper[5014]: I0228 04:38:32.166358 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537556-wwqxk" Feb 28 04:38:32 crc kubenswrapper[5014]: I0228 04:38:32.347654 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxk7d\" (UniqueName: \"kubernetes.io/projected/d84dec61-f4ef-4e0b-adb1-66694017a156-kube-api-access-hxk7d\") pod \"d84dec61-f4ef-4e0b-adb1-66694017a156\" (UID: \"d84dec61-f4ef-4e0b-adb1-66694017a156\") " Feb 28 04:38:32 crc kubenswrapper[5014]: I0228 04:38:32.355038 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d84dec61-f4ef-4e0b-adb1-66694017a156-kube-api-access-hxk7d" (OuterVolumeSpecName: "kube-api-access-hxk7d") pod "d84dec61-f4ef-4e0b-adb1-66694017a156" (UID: "d84dec61-f4ef-4e0b-adb1-66694017a156"). InnerVolumeSpecName "kube-api-access-hxk7d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:38:32 crc kubenswrapper[5014]: I0228 04:38:32.450413 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxk7d\" (UniqueName: \"kubernetes.io/projected/d84dec61-f4ef-4e0b-adb1-66694017a156-kube-api-access-hxk7d\") on node \"crc\" DevicePath \"\"" Feb 28 04:38:32 crc kubenswrapper[5014]: I0228 04:38:32.883442 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537556-wwqxk" event={"ID":"d84dec61-f4ef-4e0b-adb1-66694017a156","Type":"ContainerDied","Data":"cd8b5bfc02c146d03f03c977c5b5691b77d82835da0fe2746707b5f206faeff9"} Feb 28 04:38:32 crc kubenswrapper[5014]: I0228 04:38:32.883496 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd8b5bfc02c146d03f03c977c5b5691b77d82835da0fe2746707b5f206faeff9" Feb 28 04:38:32 crc kubenswrapper[5014]: I0228 04:38:32.883530 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537556-wwqxk" Feb 28 04:38:39 crc kubenswrapper[5014]: I0228 04:38:39.932827 5014 generic.go:334] "Generic (PLEG): container finished" podID="d19bc223-12d6-45a9-87de-31ec3b6d9557" containerID="5b7e08a50df10538a739eca6fd667f4f2c634b8b4a4f22b7ed6b3230c1cb145d" exitCode=0 Feb 28 04:38:39 crc kubenswrapper[5014]: I0228 04:38:39.932895 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-64ktb" event={"ID":"d19bc223-12d6-45a9-87de-31ec3b6d9557","Type":"ContainerDied","Data":"5b7e08a50df10538a739eca6fd667f4f2c634b8b4a4f22b7ed6b3230c1cb145d"} Feb 28 04:38:39 crc kubenswrapper[5014]: I0228 04:38:39.939620 5014 generic.go:334] "Generic (PLEG): container finished" podID="8a00f74f-e858-42cc-b882-492afd45684d" containerID="533d9b88cb01043fe2246d71f096ac074cbbd32a000590d3da5a19183cc335c4" exitCode=0 Feb 28 04:38:39 crc kubenswrapper[5014]: I0228 04:38:39.939723 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npdf6" event={"ID":"8a00f74f-e858-42cc-b882-492afd45684d","Type":"ContainerDied","Data":"533d9b88cb01043fe2246d71f096ac074cbbd32a000590d3da5a19183cc335c4"} Feb 28 04:38:39 crc kubenswrapper[5014]: I0228 04:38:39.943553 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5h8g" event={"ID":"7bdb5d29-5a4c-4358-a276-58efd08a8655","Type":"ContainerStarted","Data":"8b6782a79650a24ca2f71158a12302b475127f23b5ff3ccdd142cf0f7240217b"} Feb 28 04:38:40 crc kubenswrapper[5014]: I0228 04:38:40.959375 5014 generic.go:334] "Generic (PLEG): container finished" podID="7bdb5d29-5a4c-4358-a276-58efd08a8655" containerID="8b6782a79650a24ca2f71158a12302b475127f23b5ff3ccdd142cf0f7240217b" exitCode=0 Feb 28 04:38:40 crc kubenswrapper[5014]: I0228 04:38:40.959459 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5h8g" 
event={"ID":"7bdb5d29-5a4c-4358-a276-58efd08a8655","Type":"ContainerDied","Data":"8b6782a79650a24ca2f71158a12302b475127f23b5ff3ccdd142cf0f7240217b"} Feb 28 04:38:43 crc kubenswrapper[5014]: I0228 04:38:43.982997 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-64ktb" event={"ID":"d19bc223-12d6-45a9-87de-31ec3b6d9557","Type":"ContainerStarted","Data":"5bc192e19a4c6a2f99a2bf686132fe8881814038a7ce7a2333f9d36839466dab"} Feb 28 04:38:44 crc kubenswrapper[5014]: I0228 04:38:44.016440 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-64ktb" podStartSLOduration=3.453543427 podStartE2EDuration="1m17.016407977s" podCreationTimestamp="2026-02-28 04:37:27 +0000 UTC" firstStartedPulling="2026-02-28 04:37:29.218567676 +0000 UTC m=+237.888693586" lastFinishedPulling="2026-02-28 04:38:42.781432226 +0000 UTC m=+311.451558136" observedRunningTime="2026-02-28 04:38:44.010697639 +0000 UTC m=+312.680823559" watchObservedRunningTime="2026-02-28 04:38:44.016407977 +0000 UTC m=+312.686533917" Feb 28 04:38:45 crc kubenswrapper[5014]: I0228 04:38:45.078648 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5676c779b9-4hc57"] Feb 28 04:38:45 crc kubenswrapper[5014]: I0228 04:38:45.079270 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5676c779b9-4hc57" podUID="599d6a93-0b00-42e3-9dee-37a3888acf48" containerName="controller-manager" containerID="cri-o://70fa4e9f174d1e63431b5539de4ef8f9af45cb00517c2219093e66ef13723855" gracePeriod=30 Feb 28 04:38:45 crc kubenswrapper[5014]: I0228 04:38:45.185969 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-866776bd7-58b5l"] Feb 28 04:38:45 crc kubenswrapper[5014]: I0228 04:38:45.186275 5014 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-866776bd7-58b5l" podUID="3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72" containerName="route-controller-manager" containerID="cri-o://93b29a1d9832cd1eb109b729786306ef964d4dcc0f2077f03511cd4c9ae2d904" gracePeriod=30 Feb 28 04:38:46 crc kubenswrapper[5014]: I0228 04:38:46.003122 5014 generic.go:334] "Generic (PLEG): container finished" podID="599d6a93-0b00-42e3-9dee-37a3888acf48" containerID="70fa4e9f174d1e63431b5539de4ef8f9af45cb00517c2219093e66ef13723855" exitCode=0 Feb 28 04:38:46 crc kubenswrapper[5014]: I0228 04:38:46.003275 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5676c779b9-4hc57" event={"ID":"599d6a93-0b00-42e3-9dee-37a3888acf48","Type":"ContainerDied","Data":"70fa4e9f174d1e63431b5539de4ef8f9af45cb00517c2219093e66ef13723855"} Feb 28 04:38:46 crc kubenswrapper[5014]: I0228 04:38:46.005752 5014 generic.go:334] "Generic (PLEG): container finished" podID="3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72" containerID="93b29a1d9832cd1eb109b729786306ef964d4dcc0f2077f03511cd4c9ae2d904" exitCode=0 Feb 28 04:38:46 crc kubenswrapper[5014]: I0228 04:38:46.005828 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-866776bd7-58b5l" event={"ID":"3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72","Type":"ContainerDied","Data":"93b29a1d9832cd1eb109b729786306ef964d4dcc0f2077f03511cd4c9ae2d904"} Feb 28 04:38:46 crc kubenswrapper[5014]: I0228 04:38:46.779997 5014 patch_prober.go:28] interesting pod/route-controller-manager-866776bd7-58b5l container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body= Feb 28 04:38:46 crc kubenswrapper[5014]: I0228 04:38:46.780089 5014 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-866776bd7-58b5l" podUID="3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" Feb 28 04:38:48 crc kubenswrapper[5014]: I0228 04:38:48.133347 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-64ktb" Feb 28 04:38:48 crc kubenswrapper[5014]: I0228 04:38:48.135028 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-64ktb" Feb 28 04:38:49 crc kubenswrapper[5014]: I0228 04:38:49.795295 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-64ktb" podUID="d19bc223-12d6-45a9-87de-31ec3b6d9557" containerName="registry-server" probeResult="failure" output=< Feb 28 04:38:49 crc kubenswrapper[5014]: timeout: failed to connect service ":50051" within 1s Feb 28 04:38:49 crc kubenswrapper[5014]: > Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.273575 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5676c779b9-4hc57" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.279631 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-866776bd7-58b5l" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.309282 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-55f4b7c499-pw2kt"] Feb 28 04:38:51 crc kubenswrapper[5014]: E0228 04:38:51.309601 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72" containerName="route-controller-manager" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.309621 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72" containerName="route-controller-manager" Feb 28 04:38:51 crc kubenswrapper[5014]: E0228 04:38:51.309635 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="599d6a93-0b00-42e3-9dee-37a3888acf48" containerName="controller-manager" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.309643 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="599d6a93-0b00-42e3-9dee-37a3888acf48" containerName="controller-manager" Feb 28 04:38:51 crc kubenswrapper[5014]: E0228 04:38:51.309662 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1943af29-93f0-470e-85e8-4d53409329ae" containerName="oc" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.309672 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="1943af29-93f0-470e-85e8-4d53409329ae" containerName="oc" Feb 28 04:38:51 crc kubenswrapper[5014]: E0228 04:38:51.309691 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d84dec61-f4ef-4e0b-adb1-66694017a156" containerName="oc" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.309698 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="d84dec61-f4ef-4e0b-adb1-66694017a156" containerName="oc" Feb 28 04:38:51 crc kubenswrapper[5014]: E0228 04:38:51.309707 5014 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fe2969a7-da7d-4775-85cd-457fa5467c79" containerName="pruner" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.309715 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe2969a7-da7d-4775-85cd-457fa5467c79" containerName="pruner" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.309893 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="d84dec61-f4ef-4e0b-adb1-66694017a156" containerName="oc" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.309907 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe2969a7-da7d-4775-85cd-457fa5467c79" containerName="pruner" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.309922 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="599d6a93-0b00-42e3-9dee-37a3888acf48" containerName="controller-manager" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.309933 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72" containerName="route-controller-manager" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.309948 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="1943af29-93f0-470e-85e8-4d53409329ae" containerName="oc" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.311147 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.328090 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55f4b7c499-pw2kt"] Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.471917 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/599d6a93-0b00-42e3-9dee-37a3888acf48-serving-cert\") pod \"599d6a93-0b00-42e3-9dee-37a3888acf48\" (UID: \"599d6a93-0b00-42e3-9dee-37a3888acf48\") " Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.471975 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/599d6a93-0b00-42e3-9dee-37a3888acf48-config\") pod \"599d6a93-0b00-42e3-9dee-37a3888acf48\" (UID: \"599d6a93-0b00-42e3-9dee-37a3888acf48\") " Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.471994 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72-client-ca\") pod \"3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72\" (UID: \"3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72\") " Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.472012 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72-config\") pod \"3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72\" (UID: \"3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72\") " Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.472038 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/599d6a93-0b00-42e3-9dee-37a3888acf48-proxy-ca-bundles\") pod \"599d6a93-0b00-42e3-9dee-37a3888acf48\" (UID: 
\"599d6a93-0b00-42e3-9dee-37a3888acf48\") " Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.472074 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72-serving-cert\") pod \"3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72\" (UID: \"3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72\") " Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.472111 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2k5sz\" (UniqueName: \"kubernetes.io/projected/3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72-kube-api-access-2k5sz\") pod \"3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72\" (UID: \"3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72\") " Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.472169 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59b2k\" (UniqueName: \"kubernetes.io/projected/599d6a93-0b00-42e3-9dee-37a3888acf48-kube-api-access-59b2k\") pod \"599d6a93-0b00-42e3-9dee-37a3888acf48\" (UID: \"599d6a93-0b00-42e3-9dee-37a3888acf48\") " Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.472189 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/599d6a93-0b00-42e3-9dee-37a3888acf48-client-ca\") pod \"599d6a93-0b00-42e3-9dee-37a3888acf48\" (UID: \"599d6a93-0b00-42e3-9dee-37a3888acf48\") " Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.472330 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zdnb\" (UniqueName: \"kubernetes.io/projected/f314ad52-83d6-47fd-931d-892a30cca689-kube-api-access-6zdnb\") pod \"controller-manager-55f4b7c499-pw2kt\" (UID: \"f314ad52-83d6-47fd-931d-892a30cca689\") " pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.472355 5014 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f314ad52-83d6-47fd-931d-892a30cca689-serving-cert\") pod \"controller-manager-55f4b7c499-pw2kt\" (UID: \"f314ad52-83d6-47fd-931d-892a30cca689\") " pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.472390 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f314ad52-83d6-47fd-931d-892a30cca689-config\") pod \"controller-manager-55f4b7c499-pw2kt\" (UID: \"f314ad52-83d6-47fd-931d-892a30cca689\") " pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.472459 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f314ad52-83d6-47fd-931d-892a30cca689-client-ca\") pod \"controller-manager-55f4b7c499-pw2kt\" (UID: \"f314ad52-83d6-47fd-931d-892a30cca689\") " pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.472483 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f314ad52-83d6-47fd-931d-892a30cca689-proxy-ca-bundles\") pod \"controller-manager-55f4b7c499-pw2kt\" (UID: \"f314ad52-83d6-47fd-931d-892a30cca689\") " pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.473106 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72-config" (OuterVolumeSpecName: "config") pod "3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72" (UID: "3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.473138 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72-client-ca" (OuterVolumeSpecName: "client-ca") pod "3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72" (UID: "3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.473161 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/599d6a93-0b00-42e3-9dee-37a3888acf48-client-ca" (OuterVolumeSpecName: "client-ca") pod "599d6a93-0b00-42e3-9dee-37a3888acf48" (UID: "599d6a93-0b00-42e3-9dee-37a3888acf48"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.473703 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/599d6a93-0b00-42e3-9dee-37a3888acf48-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "599d6a93-0b00-42e3-9dee-37a3888acf48" (UID: "599d6a93-0b00-42e3-9dee-37a3888acf48"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.473739 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/599d6a93-0b00-42e3-9dee-37a3888acf48-config" (OuterVolumeSpecName: "config") pod "599d6a93-0b00-42e3-9dee-37a3888acf48" (UID: "599d6a93-0b00-42e3-9dee-37a3888acf48"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.481711 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/599d6a93-0b00-42e3-9dee-37a3888acf48-kube-api-access-59b2k" (OuterVolumeSpecName: "kube-api-access-59b2k") pod "599d6a93-0b00-42e3-9dee-37a3888acf48" (UID: "599d6a93-0b00-42e3-9dee-37a3888acf48"). InnerVolumeSpecName "kube-api-access-59b2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.481745 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/599d6a93-0b00-42e3-9dee-37a3888acf48-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "599d6a93-0b00-42e3-9dee-37a3888acf48" (UID: "599d6a93-0b00-42e3-9dee-37a3888acf48"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.482606 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72" (UID: "3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.487309 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72-kube-api-access-2k5sz" (OuterVolumeSpecName: "kube-api-access-2k5sz") pod "3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72" (UID: "3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72"). InnerVolumeSpecName "kube-api-access-2k5sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.573686 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zdnb\" (UniqueName: \"kubernetes.io/projected/f314ad52-83d6-47fd-931d-892a30cca689-kube-api-access-6zdnb\") pod \"controller-manager-55f4b7c499-pw2kt\" (UID: \"f314ad52-83d6-47fd-931d-892a30cca689\") " pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.573750 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f314ad52-83d6-47fd-931d-892a30cca689-serving-cert\") pod \"controller-manager-55f4b7c499-pw2kt\" (UID: \"f314ad52-83d6-47fd-931d-892a30cca689\") " pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.573793 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f314ad52-83d6-47fd-931d-892a30cca689-config\") pod \"controller-manager-55f4b7c499-pw2kt\" (UID: \"f314ad52-83d6-47fd-931d-892a30cca689\") " pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.573888 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f314ad52-83d6-47fd-931d-892a30cca689-client-ca\") pod \"controller-manager-55f4b7c499-pw2kt\" (UID: \"f314ad52-83d6-47fd-931d-892a30cca689\") " pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.573917 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f314ad52-83d6-47fd-931d-892a30cca689-proxy-ca-bundles\") pod 
\"controller-manager-55f4b7c499-pw2kt\" (UID: \"f314ad52-83d6-47fd-931d-892a30cca689\") " pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.573964 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2k5sz\" (UniqueName: \"kubernetes.io/projected/3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72-kube-api-access-2k5sz\") on node \"crc\" DevicePath \"\"" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.573976 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59b2k\" (UniqueName: \"kubernetes.io/projected/599d6a93-0b00-42e3-9dee-37a3888acf48-kube-api-access-59b2k\") on node \"crc\" DevicePath \"\"" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.573985 5014 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/599d6a93-0b00-42e3-9dee-37a3888acf48-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.573993 5014 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/599d6a93-0b00-42e3-9dee-37a3888acf48-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.574001 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/599d6a93-0b00-42e3-9dee-37a3888acf48-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.574011 5014 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.574019 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72-config\") on node \"crc\" DevicePath 
\"\"" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.574027 5014 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/599d6a93-0b00-42e3-9dee-37a3888acf48-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.574035 5014 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.575431 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f314ad52-83d6-47fd-931d-892a30cca689-proxy-ca-bundles\") pod \"controller-manager-55f4b7c499-pw2kt\" (UID: \"f314ad52-83d6-47fd-931d-892a30cca689\") " pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.576490 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f314ad52-83d6-47fd-931d-892a30cca689-client-ca\") pod \"controller-manager-55f4b7c499-pw2kt\" (UID: \"f314ad52-83d6-47fd-931d-892a30cca689\") " pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.577876 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f314ad52-83d6-47fd-931d-892a30cca689-config\") pod \"controller-manager-55f4b7c499-pw2kt\" (UID: \"f314ad52-83d6-47fd-931d-892a30cca689\") " pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.578174 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f314ad52-83d6-47fd-931d-892a30cca689-serving-cert\") pod \"controller-manager-55f4b7c499-pw2kt\" (UID: \"f314ad52-83d6-47fd-931d-892a30cca689\") " pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.593000 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zdnb\" (UniqueName: \"kubernetes.io/projected/f314ad52-83d6-47fd-931d-892a30cca689-kube-api-access-6zdnb\") pod \"controller-manager-55f4b7c499-pw2kt\" (UID: \"f314ad52-83d6-47fd-931d-892a30cca689\") " pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" Feb 28 04:38:51 crc kubenswrapper[5014]: I0228 04:38:51.637598 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" Feb 28 04:38:52 crc kubenswrapper[5014]: I0228 04:38:52.047881 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-866776bd7-58b5l" event={"ID":"3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72","Type":"ContainerDied","Data":"80df16194d7b85ce76371be160cbc77ffafe6f6f44096f2fb1232bcf9375914f"} Feb 28 04:38:52 crc kubenswrapper[5014]: I0228 04:38:52.047922 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-866776bd7-58b5l" Feb 28 04:38:52 crc kubenswrapper[5014]: I0228 04:38:52.047973 5014 scope.go:117] "RemoveContainer" containerID="93b29a1d9832cd1eb109b729786306ef964d4dcc0f2077f03511cd4c9ae2d904" Feb 28 04:38:52 crc kubenswrapper[5014]: I0228 04:38:52.051630 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5676c779b9-4hc57" event={"ID":"599d6a93-0b00-42e3-9dee-37a3888acf48","Type":"ContainerDied","Data":"1c6c6199f17b57f9bede3185181cff746136d12d45ea8998d3b98f4ba56a655d"} Feb 28 04:38:52 crc kubenswrapper[5014]: I0228 04:38:52.051715 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5676c779b9-4hc57" Feb 28 04:38:52 crc kubenswrapper[5014]: I0228 04:38:52.114479 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5676c779b9-4hc57"] Feb 28 04:38:52 crc kubenswrapper[5014]: I0228 04:38:52.119799 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5676c779b9-4hc57"] Feb 28 04:38:52 crc kubenswrapper[5014]: I0228 04:38:52.127877 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-866776bd7-58b5l"] Feb 28 04:38:52 crc kubenswrapper[5014]: I0228 04:38:52.131597 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-866776bd7-58b5l"] Feb 28 04:38:52 crc kubenswrapper[5014]: I0228 04:38:52.178286 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72" path="/var/lib/kubelet/pods/3b91ec29-c4f0-4689-80d7-2d7d8a5f0f72/volumes" Feb 28 04:38:52 crc kubenswrapper[5014]: I0228 04:38:52.178916 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="599d6a93-0b00-42e3-9dee-37a3888acf48" path="/var/lib/kubelet/pods/599d6a93-0b00-42e3-9dee-37a3888acf48/volumes" Feb 28 04:38:53 crc kubenswrapper[5014]: I0228 04:38:53.290279 5014 scope.go:117] "RemoveContainer" containerID="70fa4e9f174d1e63431b5539de4ef8f9af45cb00517c2219093e66ef13723855" Feb 28 04:38:53 crc kubenswrapper[5014]: I0228 04:38:53.479850 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh"] Feb 28 04:38:53 crc kubenswrapper[5014]: I0228 04:38:53.481079 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" Feb 28 04:38:53 crc kubenswrapper[5014]: I0228 04:38:53.483040 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 28 04:38:53 crc kubenswrapper[5014]: I0228 04:38:53.484254 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 28 04:38:53 crc kubenswrapper[5014]: I0228 04:38:53.484939 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 28 04:38:53 crc kubenswrapper[5014]: I0228 04:38:53.487382 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 28 04:38:53 crc kubenswrapper[5014]: I0228 04:38:53.487650 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 28 04:38:53 crc kubenswrapper[5014]: I0228 04:38:53.488121 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 28 04:38:53 crc kubenswrapper[5014]: I0228 04:38:53.494925 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh"] Feb 28 04:38:53 crc kubenswrapper[5014]: I0228 04:38:53.631181 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js6ch\" (UniqueName: \"kubernetes.io/projected/d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6-kube-api-access-js6ch\") pod \"route-controller-manager-855985cc94-t8kkh\" (UID: \"d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6\") " pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" Feb 28 04:38:53 crc kubenswrapper[5014]: I0228 04:38:53.631225 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6-config\") pod \"route-controller-manager-855985cc94-t8kkh\" (UID: \"d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6\") " pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" Feb 28 04:38:53 crc kubenswrapper[5014]: I0228 04:38:53.632097 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6-client-ca\") pod \"route-controller-manager-855985cc94-t8kkh\" (UID: \"d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6\") " pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" Feb 28 04:38:53 crc kubenswrapper[5014]: I0228 04:38:53.632135 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6-serving-cert\") pod \"route-controller-manager-855985cc94-t8kkh\" (UID: \"d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6\") " pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" Feb 28 04:38:53 crc kubenswrapper[5014]: I0228 04:38:53.733279 5014 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-js6ch\" (UniqueName: \"kubernetes.io/projected/d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6-kube-api-access-js6ch\") pod \"route-controller-manager-855985cc94-t8kkh\" (UID: \"d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6\") " pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" Feb 28 04:38:53 crc kubenswrapper[5014]: I0228 04:38:53.733961 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6-config\") pod \"route-controller-manager-855985cc94-t8kkh\" (UID: \"d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6\") " pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" Feb 28 04:38:53 crc kubenswrapper[5014]: I0228 04:38:53.734073 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6-client-ca\") pod \"route-controller-manager-855985cc94-t8kkh\" (UID: \"d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6\") " pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" Feb 28 04:38:53 crc kubenswrapper[5014]: I0228 04:38:53.735886 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6-serving-cert\") pod \"route-controller-manager-855985cc94-t8kkh\" (UID: \"d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6\") " pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" Feb 28 04:38:53 crc kubenswrapper[5014]: I0228 04:38:53.736325 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6-config\") pod \"route-controller-manager-855985cc94-t8kkh\" (UID: \"d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6\") " 
pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" Feb 28 04:38:53 crc kubenswrapper[5014]: I0228 04:38:53.736331 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6-client-ca\") pod \"route-controller-manager-855985cc94-t8kkh\" (UID: \"d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6\") " pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" Feb 28 04:38:53 crc kubenswrapper[5014]: I0228 04:38:53.742559 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-55f4b7c499-pw2kt"] Feb 28 04:38:53 crc kubenswrapper[5014]: I0228 04:38:53.750068 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6-serving-cert\") pod \"route-controller-manager-855985cc94-t8kkh\" (UID: \"d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6\") " pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" Feb 28 04:38:53 crc kubenswrapper[5014]: W0228 04:38:53.751521 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf314ad52_83d6_47fd_931d_892a30cca689.slice/crio-8b394d77c55a311a6bd59d64de892a5c71f25ce7def00f983d35ce49fd924a84 WatchSource:0}: Error finding container 8b394d77c55a311a6bd59d64de892a5c71f25ce7def00f983d35ce49fd924a84: Status 404 returned error can't find the container with id 8b394d77c55a311a6bd59d64de892a5c71f25ce7def00f983d35ce49fd924a84 Feb 28 04:38:53 crc kubenswrapper[5014]: I0228 04:38:53.751891 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js6ch\" (UniqueName: \"kubernetes.io/projected/d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6-kube-api-access-js6ch\") pod \"route-controller-manager-855985cc94-t8kkh\" (UID: 
\"d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6\") " pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" Feb 28 04:38:53 crc kubenswrapper[5014]: I0228 04:38:53.932031 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" Feb 28 04:38:54 crc kubenswrapper[5014]: I0228 04:38:54.075603 5014 generic.go:334] "Generic (PLEG): container finished" podID="bd99ec6a-5237-42f9-81ad-bd813d262c6d" containerID="c591e2f67ebe687cbde7f79f26f7a1406e65e81028d8cea464f4c9e79fc553e7" exitCode=0 Feb 28 04:38:54 crc kubenswrapper[5014]: I0228 04:38:54.075769 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kx627" event={"ID":"bd99ec6a-5237-42f9-81ad-bd813d262c6d","Type":"ContainerDied","Data":"c591e2f67ebe687cbde7f79f26f7a1406e65e81028d8cea464f4c9e79fc553e7"} Feb 28 04:38:54 crc kubenswrapper[5014]: I0228 04:38:54.083059 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5h8g" event={"ID":"7bdb5d29-5a4c-4358-a276-58efd08a8655","Type":"ContainerStarted","Data":"c0cc124af7da90090bb62569c5a149cbecfaceeadf8e6ff8397672a358fc9852"} Feb 28 04:38:54 crc kubenswrapper[5014]: I0228 04:38:54.090590 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zq82" event={"ID":"bba9702f-9e04-46d4-9a98-92d5303383c4","Type":"ContainerStarted","Data":"efba7c4f0f824ce32d4eeb841869734789a5f143768940962c467dfd5ca7984a"} Feb 28 04:38:54 crc kubenswrapper[5014]: I0228 04:38:54.099644 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9cznf" event={"ID":"52079806-fc0c-4852-8150-0123d376c1b2","Type":"ContainerStarted","Data":"3bd97587235e7e11d8b5a8594f80bb2b49ffc96e41504ac06bdb983c8ce07d1d"} Feb 28 04:38:54 crc kubenswrapper[5014]: I0228 04:38:54.111106 5014 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-npdf6" event={"ID":"8a00f74f-e858-42cc-b882-492afd45684d","Type":"ContainerStarted","Data":"4351bfe2acee3deca8041e42244892f1d8d53660d71f470076df4c370278e406"} Feb 28 04:38:54 crc kubenswrapper[5014]: I0228 04:38:54.117039 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqfvs" event={"ID":"50cf3400-fb73-4038-b616-2d3559aaf784","Type":"ContainerStarted","Data":"29f2dfffbf0470555eb7b9ebf16d9b06b5ecfe15b1a7e425c9b02dc66dce62ed"} Feb 28 04:38:54 crc kubenswrapper[5014]: I0228 04:38:54.132027 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" event={"ID":"f314ad52-83d6-47fd-931d-892a30cca689","Type":"ContainerStarted","Data":"ed219ae34e580b0cbabf3c1f7bff3799e73016b8a423fd24dea329c390507e00"} Feb 28 04:38:54 crc kubenswrapper[5014]: I0228 04:38:54.132087 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" event={"ID":"f314ad52-83d6-47fd-931d-892a30cca689","Type":"ContainerStarted","Data":"8b394d77c55a311a6bd59d64de892a5c71f25ce7def00f983d35ce49fd924a84"} Feb 28 04:38:54 crc kubenswrapper[5014]: I0228 04:38:54.132794 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" Feb 28 04:38:54 crc kubenswrapper[5014]: I0228 04:38:54.134908 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r5h8g" podStartSLOduration=3.6680281519999998 podStartE2EDuration="1m29.134875444s" podCreationTimestamp="2026-02-28 04:37:25 +0000 UTC" firstStartedPulling="2026-02-28 04:37:27.934934698 +0000 UTC m=+236.605060608" lastFinishedPulling="2026-02-28 04:38:53.40178197 +0000 UTC m=+322.071907900" observedRunningTime="2026-02-28 04:38:54.130918568 +0000 UTC m=+322.801044478" 
watchObservedRunningTime="2026-02-28 04:38:54.134875444 +0000 UTC m=+322.805001364" Feb 28 04:38:54 crc kubenswrapper[5014]: I0228 04:38:54.148956 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" Feb 28 04:38:54 crc kubenswrapper[5014]: I0228 04:38:54.149018 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hx7qb" event={"ID":"3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4","Type":"ContainerStarted","Data":"de7f0fee352c5ca0fdc4a3b43e19a807f1a9b4d221f37fc480852272ee20eb91"} Feb 28 04:38:54 crc kubenswrapper[5014]: I0228 04:38:54.183114 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" podStartSLOduration=9.183091495 podStartE2EDuration="9.183091495s" podCreationTimestamp="2026-02-28 04:38:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:38:54.180251842 +0000 UTC m=+322.850377752" watchObservedRunningTime="2026-02-28 04:38:54.183091495 +0000 UTC m=+322.853217405" Feb 28 04:38:54 crc kubenswrapper[5014]: I0228 04:38:54.253141 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-npdf6" podStartSLOduration=6.03562747 podStartE2EDuration="1m27.25311706s" podCreationTimestamp="2026-02-28 04:37:27 +0000 UTC" firstStartedPulling="2026-02-28 04:37:29.113228805 +0000 UTC m=+237.783354705" lastFinishedPulling="2026-02-28 04:38:50.330718385 +0000 UTC m=+319.000844295" observedRunningTime="2026-02-28 04:38:54.227356021 +0000 UTC m=+322.897481931" watchObservedRunningTime="2026-02-28 04:38:54.25311706 +0000 UTC m=+322.923242970" Feb 28 04:38:54 crc kubenswrapper[5014]: I0228 04:38:54.387908 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh"] Feb 28 04:38:54 crc kubenswrapper[5014]: E0228 04:38:54.584294 5014 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbba9702f_9e04_46d4_9a98_92d5303383c4.slice/crio-efba7c4f0f824ce32d4eeb841869734789a5f143768940962c467dfd5ca7984a.scope\": RecentStats: unable to find data in memory cache]" Feb 28 04:38:55 crc kubenswrapper[5014]: I0228 04:38:55.151296 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kx627" event={"ID":"bd99ec6a-5237-42f9-81ad-bd813d262c6d","Type":"ContainerStarted","Data":"aa0668cb3d249afac3762bcf32c8f92bde7a8a061c5114c229821ab7ce4beb62"} Feb 28 04:38:55 crc kubenswrapper[5014]: I0228 04:38:55.152738 5014 generic.go:334] "Generic (PLEG): container finished" podID="3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4" containerID="de7f0fee352c5ca0fdc4a3b43e19a807f1a9b4d221f37fc480852272ee20eb91" exitCode=0 Feb 28 04:38:55 crc kubenswrapper[5014]: I0228 04:38:55.152789 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hx7qb" event={"ID":"3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4","Type":"ContainerDied","Data":"de7f0fee352c5ca0fdc4a3b43e19a807f1a9b4d221f37fc480852272ee20eb91"} Feb 28 04:38:55 crc kubenswrapper[5014]: I0228 04:38:55.154683 5014 generic.go:334] "Generic (PLEG): container finished" podID="bba9702f-9e04-46d4-9a98-92d5303383c4" containerID="efba7c4f0f824ce32d4eeb841869734789a5f143768940962c467dfd5ca7984a" exitCode=0 Feb 28 04:38:55 crc kubenswrapper[5014]: I0228 04:38:55.154733 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zq82" event={"ID":"bba9702f-9e04-46d4-9a98-92d5303383c4","Type":"ContainerDied","Data":"efba7c4f0f824ce32d4eeb841869734789a5f143768940962c467dfd5ca7984a"} Feb 28 04:38:55 crc 
kubenswrapper[5014]: I0228 04:38:55.156695 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" event={"ID":"d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6","Type":"ContainerStarted","Data":"33ae25eb6000da63aba2bb496d04650dea6d8c24f6219bb35a18a80f94d982b4"} Feb 28 04:38:55 crc kubenswrapper[5014]: I0228 04:38:55.156745 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" event={"ID":"d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6","Type":"ContainerStarted","Data":"597ac564228c35a6d7d94d1eab12f1acab68b6099ab83ace3b2cecbc68db482e"} Feb 28 04:38:55 crc kubenswrapper[5014]: I0228 04:38:55.156900 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" Feb 28 04:38:55 crc kubenswrapper[5014]: I0228 04:38:55.161635 5014 generic.go:334] "Generic (PLEG): container finished" podID="52079806-fc0c-4852-8150-0123d376c1b2" containerID="3bd97587235e7e11d8b5a8594f80bb2b49ffc96e41504ac06bdb983c8ce07d1d" exitCode=0 Feb 28 04:38:55 crc kubenswrapper[5014]: I0228 04:38:55.161712 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9cznf" event={"ID":"52079806-fc0c-4852-8150-0123d376c1b2","Type":"ContainerDied","Data":"3bd97587235e7e11d8b5a8594f80bb2b49ffc96e41504ac06bdb983c8ce07d1d"} Feb 28 04:38:55 crc kubenswrapper[5014]: I0228 04:38:55.164655 5014 generic.go:334] "Generic (PLEG): container finished" podID="50cf3400-fb73-4038-b616-2d3559aaf784" containerID="29f2dfffbf0470555eb7b9ebf16d9b06b5ecfe15b1a7e425c9b02dc66dce62ed" exitCode=0 Feb 28 04:38:55 crc kubenswrapper[5014]: I0228 04:38:55.164717 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqfvs" 
event={"ID":"50cf3400-fb73-4038-b616-2d3559aaf784","Type":"ContainerDied","Data":"29f2dfffbf0470555eb7b9ebf16d9b06b5ecfe15b1a7e425c9b02dc66dce62ed"} Feb 28 04:38:55 crc kubenswrapper[5014]: I0228 04:38:55.164746 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqfvs" event={"ID":"50cf3400-fb73-4038-b616-2d3559aaf784","Type":"ContainerStarted","Data":"27cbe99ab4658bfe6b52aac789ba02457379a32f74bf13136730c5b0c69a0f4e"} Feb 28 04:38:55 crc kubenswrapper[5014]: I0228 04:38:55.172513 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" Feb 28 04:38:55 crc kubenswrapper[5014]: I0228 04:38:55.176868 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kx627" podStartSLOduration=3.640917791 podStartE2EDuration="1m30.176849605s" podCreationTimestamp="2026-02-28 04:37:25 +0000 UTC" firstStartedPulling="2026-02-28 04:37:27.98997968 +0000 UTC m=+236.660105590" lastFinishedPulling="2026-02-28 04:38:54.525911494 +0000 UTC m=+323.196037404" observedRunningTime="2026-02-28 04:38:55.174941178 +0000 UTC m=+323.845067088" watchObservedRunningTime="2026-02-28 04:38:55.176849605 +0000 UTC m=+323.846975515" Feb 28 04:38:55 crc kubenswrapper[5014]: I0228 04:38:55.217742 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" podStartSLOduration=10.21772239 podStartE2EDuration="10.21772239s" podCreationTimestamp="2026-02-28 04:38:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:38:55.214150284 +0000 UTC m=+323.884276194" watchObservedRunningTime="2026-02-28 04:38:55.21772239 +0000 UTC m=+323.887848300" Feb 28 04:38:55 crc kubenswrapper[5014]: I0228 04:38:55.301319 5014 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sqfvs" podStartSLOduration=3.656598817 podStartE2EDuration="1m30.301297755s" podCreationTimestamp="2026-02-28 04:37:25 +0000 UTC" firstStartedPulling="2026-02-28 04:37:27.942027358 +0000 UTC m=+236.612153268" lastFinishedPulling="2026-02-28 04:38:54.586726296 +0000 UTC m=+323.256852206" observedRunningTime="2026-02-28 04:38:55.274157824 +0000 UTC m=+323.944283734" watchObservedRunningTime="2026-02-28 04:38:55.301297755 +0000 UTC m=+323.971423665" Feb 28 04:38:55 crc kubenswrapper[5014]: I0228 04:38:55.686472 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-sqfvs" Feb 28 04:38:55 crc kubenswrapper[5014]: I0228 04:38:55.686693 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sqfvs" Feb 28 04:38:56 crc kubenswrapper[5014]: I0228 04:38:56.137194 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kx627" Feb 28 04:38:56 crc kubenswrapper[5014]: I0228 04:38:56.137247 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kx627" Feb 28 04:38:56 crc kubenswrapper[5014]: I0228 04:38:56.178108 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r5h8g" Feb 28 04:38:56 crc kubenswrapper[5014]: I0228 04:38:56.178141 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r5h8g" Feb 28 04:38:56 crc kubenswrapper[5014]: I0228 04:38:56.179020 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hx7qb" 
event={"ID":"3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4","Type":"ContainerStarted","Data":"7b31560e4b1aeb695a44ad469fe6f499752151480fcabdf3ec5b0d5247168adc"} Feb 28 04:38:56 crc kubenswrapper[5014]: I0228 04:38:56.181186 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zq82" event={"ID":"bba9702f-9e04-46d4-9a98-92d5303383c4","Type":"ContainerStarted","Data":"8db62d0137fa23ba071b5293ea6547d3f10ce3906d4420ceae3adde607ddace5"} Feb 28 04:38:56 crc kubenswrapper[5014]: I0228 04:38:56.184587 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9cznf" event={"ID":"52079806-fc0c-4852-8150-0123d376c1b2","Type":"ContainerStarted","Data":"d581abc1b7c171ea12adfd3289725c060f4ce47ee40d93f232591cc0e173df7a"} Feb 28 04:38:56 crc kubenswrapper[5014]: I0228 04:38:56.201064 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hx7qb" podStartSLOduration=3.995394857 podStartE2EDuration="1m28.201041462s" podCreationTimestamp="2026-02-28 04:37:28 +0000 UTC" firstStartedPulling="2026-02-28 04:37:31.352120771 +0000 UTC m=+240.022246681" lastFinishedPulling="2026-02-28 04:38:55.557767376 +0000 UTC m=+324.227893286" observedRunningTime="2026-02-28 04:38:56.197966281 +0000 UTC m=+324.868092191" watchObservedRunningTime="2026-02-28 04:38:56.201041462 +0000 UTC m=+324.871167372" Feb 28 04:38:56 crc kubenswrapper[5014]: I0228 04:38:56.218692 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5zq82" podStartSLOduration=3.943471352 podStartE2EDuration="1m28.218673882s" podCreationTimestamp="2026-02-28 04:37:28 +0000 UTC" firstStartedPulling="2026-02-28 04:37:31.301584997 +0000 UTC m=+239.971710907" lastFinishedPulling="2026-02-28 04:38:55.576787527 +0000 UTC m=+324.246913437" observedRunningTime="2026-02-28 04:38:56.217111155 +0000 UTC m=+324.887237075" 
watchObservedRunningTime="2026-02-28 04:38:56.218673882 +0000 UTC m=+324.888799792" Feb 28 04:38:56 crc kubenswrapper[5014]: I0228 04:38:56.225552 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r5h8g" Feb 28 04:38:56 crc kubenswrapper[5014]: I0228 04:38:56.239269 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9cznf" podStartSLOduration=2.49487341 podStartE2EDuration="1m31.239245538s" podCreationTimestamp="2026-02-28 04:37:25 +0000 UTC" firstStartedPulling="2026-02-28 04:37:26.787996719 +0000 UTC m=+235.458122629" lastFinishedPulling="2026-02-28 04:38:55.532368847 +0000 UTC m=+324.202494757" observedRunningTime="2026-02-28 04:38:56.237056704 +0000 UTC m=+324.907182614" watchObservedRunningTime="2026-02-28 04:38:56.239245538 +0000 UTC m=+324.909371468" Feb 28 04:38:56 crc kubenswrapper[5014]: I0228 04:38:56.733944 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-sqfvs" podUID="50cf3400-fb73-4038-b616-2d3559aaf784" containerName="registry-server" probeResult="failure" output=< Feb 28 04:38:56 crc kubenswrapper[5014]: timeout: failed to connect service ":50051" within 1s Feb 28 04:38:56 crc kubenswrapper[5014]: > Feb 28 04:38:57 crc kubenswrapper[5014]: I0228 04:38:57.172556 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-kx627" podUID="bd99ec6a-5237-42f9-81ad-bd813d262c6d" containerName="registry-server" probeResult="failure" output=< Feb 28 04:38:57 crc kubenswrapper[5014]: timeout: failed to connect service ":50051" within 1s Feb 28 04:38:57 crc kubenswrapper[5014]: > Feb 28 04:38:57 crc kubenswrapper[5014]: I0228 04:38:57.720081 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-npdf6" Feb 28 04:38:57 crc kubenswrapper[5014]: I0228 
04:38:57.720628 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-npdf6" Feb 28 04:38:57 crc kubenswrapper[5014]: I0228 04:38:57.772407 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-npdf6" Feb 28 04:38:57 crc kubenswrapper[5014]: I0228 04:38:57.789610 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fkqnd"] Feb 28 04:38:58 crc kubenswrapper[5014]: I0228 04:38:58.179579 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-64ktb" Feb 28 04:38:58 crc kubenswrapper[5014]: I0228 04:38:58.228746 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-64ktb" Feb 28 04:38:58 crc kubenswrapper[5014]: I0228 04:38:58.250478 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-npdf6" Feb 28 04:38:58 crc kubenswrapper[5014]: I0228 04:38:58.671385 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5zq82" Feb 28 04:38:58 crc kubenswrapper[5014]: I0228 04:38:58.671873 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5zq82" Feb 28 04:38:59 crc kubenswrapper[5014]: I0228 04:38:59.075517 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hx7qb" Feb 28 04:38:59 crc kubenswrapper[5014]: I0228 04:38:59.077647 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hx7qb" Feb 28 04:38:59 crc kubenswrapper[5014]: I0228 04:38:59.729137 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5zq82" 
podUID="bba9702f-9e04-46d4-9a98-92d5303383c4" containerName="registry-server" probeResult="failure" output=< Feb 28 04:38:59 crc kubenswrapper[5014]: timeout: failed to connect service ":50051" within 1s Feb 28 04:38:59 crc kubenswrapper[5014]: > Feb 28 04:39:00 crc kubenswrapper[5014]: I0228 04:39:00.131599 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hx7qb" podUID="3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4" containerName="registry-server" probeResult="failure" output=< Feb 28 04:39:00 crc kubenswrapper[5014]: timeout: failed to connect service ":50051" within 1s Feb 28 04:39:00 crc kubenswrapper[5014]: > Feb 28 04:39:00 crc kubenswrapper[5014]: I0228 04:39:00.510213 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-64ktb"] Feb 28 04:39:00 crc kubenswrapper[5014]: I0228 04:39:00.510463 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-64ktb" podUID="d19bc223-12d6-45a9-87de-31ec3b6d9557" containerName="registry-server" containerID="cri-o://5bc192e19a4c6a2f99a2bf686132fe8881814038a7ce7a2333f9d36839466dab" gracePeriod=2 Feb 28 04:39:02 crc kubenswrapper[5014]: I0228 04:39:02.239084 5014 generic.go:334] "Generic (PLEG): container finished" podID="d19bc223-12d6-45a9-87de-31ec3b6d9557" containerID="5bc192e19a4c6a2f99a2bf686132fe8881814038a7ce7a2333f9d36839466dab" exitCode=0 Feb 28 04:39:02 crc kubenswrapper[5014]: I0228 04:39:02.239295 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-64ktb" event={"ID":"d19bc223-12d6-45a9-87de-31ec3b6d9557","Type":"ContainerDied","Data":"5bc192e19a4c6a2f99a2bf686132fe8881814038a7ce7a2333f9d36839466dab"} Feb 28 04:39:02 crc kubenswrapper[5014]: I0228 04:39:02.548151 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-64ktb" Feb 28 04:39:02 crc kubenswrapper[5014]: I0228 04:39:02.578143 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d19bc223-12d6-45a9-87de-31ec3b6d9557-catalog-content\") pod \"d19bc223-12d6-45a9-87de-31ec3b6d9557\" (UID: \"d19bc223-12d6-45a9-87de-31ec3b6d9557\") " Feb 28 04:39:02 crc kubenswrapper[5014]: I0228 04:39:02.578270 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kz2l\" (UniqueName: \"kubernetes.io/projected/d19bc223-12d6-45a9-87de-31ec3b6d9557-kube-api-access-5kz2l\") pod \"d19bc223-12d6-45a9-87de-31ec3b6d9557\" (UID: \"d19bc223-12d6-45a9-87de-31ec3b6d9557\") " Feb 28 04:39:02 crc kubenswrapper[5014]: I0228 04:39:02.578337 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d19bc223-12d6-45a9-87de-31ec3b6d9557-utilities\") pod \"d19bc223-12d6-45a9-87de-31ec3b6d9557\" (UID: \"d19bc223-12d6-45a9-87de-31ec3b6d9557\") " Feb 28 04:39:02 crc kubenswrapper[5014]: I0228 04:39:02.579657 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d19bc223-12d6-45a9-87de-31ec3b6d9557-utilities" (OuterVolumeSpecName: "utilities") pod "d19bc223-12d6-45a9-87de-31ec3b6d9557" (UID: "d19bc223-12d6-45a9-87de-31ec3b6d9557"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:39:02 crc kubenswrapper[5014]: I0228 04:39:02.586126 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19bc223-12d6-45a9-87de-31ec3b6d9557-kube-api-access-5kz2l" (OuterVolumeSpecName: "kube-api-access-5kz2l") pod "d19bc223-12d6-45a9-87de-31ec3b6d9557" (UID: "d19bc223-12d6-45a9-87de-31ec3b6d9557"). InnerVolumeSpecName "kube-api-access-5kz2l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:39:02 crc kubenswrapper[5014]: I0228 04:39:02.607689 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d19bc223-12d6-45a9-87de-31ec3b6d9557-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d19bc223-12d6-45a9-87de-31ec3b6d9557" (UID: "d19bc223-12d6-45a9-87de-31ec3b6d9557"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:39:02 crc kubenswrapper[5014]: I0228 04:39:02.679766 5014 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d19bc223-12d6-45a9-87de-31ec3b6d9557-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 04:39:02 crc kubenswrapper[5014]: I0228 04:39:02.679794 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kz2l\" (UniqueName: \"kubernetes.io/projected/d19bc223-12d6-45a9-87de-31ec3b6d9557-kube-api-access-5kz2l\") on node \"crc\" DevicePath \"\"" Feb 28 04:39:02 crc kubenswrapper[5014]: I0228 04:39:02.679826 5014 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d19bc223-12d6-45a9-87de-31ec3b6d9557-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 04:39:03 crc kubenswrapper[5014]: I0228 04:39:03.250839 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-64ktb" event={"ID":"d19bc223-12d6-45a9-87de-31ec3b6d9557","Type":"ContainerDied","Data":"e5e5e684aa05b3ad8ec2cdb411d01943990413cbcc0f061f1bc1e7656e4f53cd"} Feb 28 04:39:03 crc kubenswrapper[5014]: I0228 04:39:03.250933 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-64ktb" Feb 28 04:39:03 crc kubenswrapper[5014]: I0228 04:39:03.251508 5014 scope.go:117] "RemoveContainer" containerID="5bc192e19a4c6a2f99a2bf686132fe8881814038a7ce7a2333f9d36839466dab" Feb 28 04:39:03 crc kubenswrapper[5014]: I0228 04:39:03.284988 5014 scope.go:117] "RemoveContainer" containerID="5b7e08a50df10538a739eca6fd667f4f2c634b8b4a4f22b7ed6b3230c1cb145d" Feb 28 04:39:03 crc kubenswrapper[5014]: I0228 04:39:03.305406 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-64ktb"] Feb 28 04:39:03 crc kubenswrapper[5014]: I0228 04:39:03.311886 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-64ktb"] Feb 28 04:39:03 crc kubenswrapper[5014]: I0228 04:39:03.332429 5014 scope.go:117] "RemoveContainer" containerID="c41e25e52b50b03c3a060af4363a93cb1b4e95573b3ffe1924132c052ecc75d7" Feb 28 04:39:04 crc kubenswrapper[5014]: I0228 04:39:04.190576 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19bc223-12d6-45a9-87de-31ec3b6d9557" path="/var/lib/kubelet/pods/d19bc223-12d6-45a9-87de-31ec3b6d9557/volumes" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.081630 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-55f4b7c499-pw2kt"] Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.081970 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" podUID="f314ad52-83d6-47fd-931d-892a30cca689" containerName="controller-manager" containerID="cri-o://ed219ae34e580b0cbabf3c1f7bff3799e73016b8a423fd24dea329c390507e00" gracePeriod=30 Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.121923 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh"] Feb 28 
04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.122191 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" podUID="d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6" containerName="route-controller-manager" containerID="cri-o://33ae25eb6000da63aba2bb496d04650dea6d8c24f6219bb35a18a80f94d982b4" gracePeriod=30 Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.265647 5014 generic.go:334] "Generic (PLEG): container finished" podID="f314ad52-83d6-47fd-931d-892a30cca689" containerID="ed219ae34e580b0cbabf3c1f7bff3799e73016b8a423fd24dea329c390507e00" exitCode=0 Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.265687 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" event={"ID":"f314ad52-83d6-47fd-931d-892a30cca689","Type":"ContainerDied","Data":"ed219ae34e580b0cbabf3c1f7bff3799e73016b8a423fd24dea329c390507e00"} Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.267627 5014 generic.go:334] "Generic (PLEG): container finished" podID="d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6" containerID="33ae25eb6000da63aba2bb496d04650dea6d8c24f6219bb35a18a80f94d982b4" exitCode=0 Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.267664 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" event={"ID":"d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6","Type":"ContainerDied","Data":"33ae25eb6000da63aba2bb496d04650dea6d8c24f6219bb35a18a80f94d982b4"} Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.466450 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9cznf" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.466518 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9cznf" Feb 28 04:39:05 
crc kubenswrapper[5014]: I0228 04:39:05.516101 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9cznf" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.623714 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.663462 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.716794 5014 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.717110 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e" gracePeriod=15 Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.717296 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://719151f8073cf997e9ca1087dcbb47e8f83f0e25a700251737c5a5d4b39bfde4" gracePeriod=15 Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.717352 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea" gracePeriod=15 Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.717386 5014 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0" gracePeriod=15 Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.717417 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b" gracePeriod=15 Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.718565 5014 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 28 04:39:05 crc kubenswrapper[5014]: E0228 04:39:05.718738 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f314ad52-83d6-47fd-931d-892a30cca689" containerName="controller-manager" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.718753 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="f314ad52-83d6-47fd-931d-892a30cca689" containerName="controller-manager" Feb 28 04:39:05 crc kubenswrapper[5014]: E0228 04:39:05.718762 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.718769 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 28 04:39:05 crc kubenswrapper[5014]: E0228 04:39:05.718777 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.718783 5014 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 04:39:05 crc kubenswrapper[5014]: E0228 04:39:05.718790 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.718796 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 04:39:05 crc kubenswrapper[5014]: E0228 04:39:05.718821 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.718827 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 28 04:39:05 crc kubenswrapper[5014]: E0228 04:39:05.718838 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d19bc223-12d6-45a9-87de-31ec3b6d9557" containerName="registry-server" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.718844 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="d19bc223-12d6-45a9-87de-31ec3b6d9557" containerName="registry-server" Feb 28 04:39:05 crc kubenswrapper[5014]: E0228 04:39:05.718857 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d19bc223-12d6-45a9-87de-31ec3b6d9557" containerName="extract-content" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.718862 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="d19bc223-12d6-45a9-87de-31ec3b6d9557" containerName="extract-content" Feb 28 04:39:05 crc kubenswrapper[5014]: E0228 04:39:05.718868 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.718874 5014 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Feb 28 04:39:05 crc kubenswrapper[5014]: E0228 04:39:05.718882 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.718890 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 28 04:39:05 crc kubenswrapper[5014]: E0228 04:39:05.718899 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d19bc223-12d6-45a9-87de-31ec3b6d9557" containerName="extract-utilities" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.718905 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="d19bc223-12d6-45a9-87de-31ec3b6d9557" containerName="extract-utilities" Feb 28 04:39:05 crc kubenswrapper[5014]: E0228 04:39:05.718914 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.718920 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 28 04:39:05 crc kubenswrapper[5014]: E0228 04:39:05.718931 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6" containerName="route-controller-manager" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.718937 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6" containerName="route-controller-manager" Feb 28 04:39:05 crc kubenswrapper[5014]: E0228 04:39:05.718943 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 04:39:05 crc 
kubenswrapper[5014]: I0228 04:39:05.718948 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.719045 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.719056 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="f314ad52-83d6-47fd-931d-892a30cca689" containerName="controller-manager" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.719065 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6" containerName="route-controller-manager" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.719072 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.719080 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.719086 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.719094 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.719099 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.719105 5014 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.719112 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="d19bc223-12d6-45a9-87de-31ec3b6d9557" containerName="registry-server" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.719118 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 04:39:05 crc kubenswrapper[5014]: E0228 04:39:05.719211 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.719218 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 04:39:05 crc kubenswrapper[5014]: E0228 04:39:05.719227 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.719233 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.719349 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.720477 5014 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.720960 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.724772 5014 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.726251 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zdnb\" (UniqueName: \"kubernetes.io/projected/f314ad52-83d6-47fd-931d-892a30cca689-kube-api-access-6zdnb\") pod \"f314ad52-83d6-47fd-931d-892a30cca689\" (UID: \"f314ad52-83d6-47fd-931d-892a30cca689\") " Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.726285 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js6ch\" (UniqueName: \"kubernetes.io/projected/d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6-kube-api-access-js6ch\") pod \"d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6\" (UID: \"d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6\") " Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.726330 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f314ad52-83d6-47fd-931d-892a30cca689-client-ca\") pod \"f314ad52-83d6-47fd-931d-892a30cca689\" (UID: \"f314ad52-83d6-47fd-931d-892a30cca689\") " Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.726352 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f314ad52-83d6-47fd-931d-892a30cca689-proxy-ca-bundles\") pod \"f314ad52-83d6-47fd-931d-892a30cca689\" (UID: \"f314ad52-83d6-47fd-931d-892a30cca689\") " Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.726415 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6-config\") pod \"d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6\" (UID: \"d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6\") " Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.726448 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6-serving-cert\") pod \"d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6\" (UID: \"d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6\") " Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.726467 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f314ad52-83d6-47fd-931d-892a30cca689-serving-cert\") pod \"f314ad52-83d6-47fd-931d-892a30cca689\" (UID: \"f314ad52-83d6-47fd-931d-892a30cca689\") " Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.726482 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f314ad52-83d6-47fd-931d-892a30cca689-config\") pod \"f314ad52-83d6-47fd-931d-892a30cca689\" (UID: \"f314ad52-83d6-47fd-931d-892a30cca689\") " Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.727307 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6-config" (OuterVolumeSpecName: "config") pod "d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6" (UID: "d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.727363 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6-client-ca\") pod \"d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6\" (UID: \"d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6\") " Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.727891 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6-client-ca" (OuterVolumeSpecName: "client-ca") pod "d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6" (UID: "d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.728008 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f314ad52-83d6-47fd-931d-892a30cca689-config" (OuterVolumeSpecName: "config") pod "f314ad52-83d6-47fd-931d-892a30cca689" (UID: "f314ad52-83d6-47fd-931d-892a30cca689"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.728088 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f314ad52-83d6-47fd-931d-892a30cca689-client-ca" (OuterVolumeSpecName: "client-ca") pod "f314ad52-83d6-47fd-931d-892a30cca689" (UID: "f314ad52-83d6-47fd-931d-892a30cca689"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.728220 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f314ad52-83d6-47fd-931d-892a30cca689-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.728220 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f314ad52-83d6-47fd-931d-892a30cca689-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f314ad52-83d6-47fd-931d-892a30cca689" (UID: "f314ad52-83d6-47fd-931d-892a30cca689"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.728259 5014 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.728284 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.734261 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f314ad52-83d6-47fd-931d-892a30cca689-kube-api-access-6zdnb" (OuterVolumeSpecName: "kube-api-access-6zdnb") pod "f314ad52-83d6-47fd-931d-892a30cca689" (UID: "f314ad52-83d6-47fd-931d-892a30cca689"). InnerVolumeSpecName "kube-api-access-6zdnb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.734260 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6" (UID: "d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.734842 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6-kube-api-access-js6ch" (OuterVolumeSpecName: "kube-api-access-js6ch") pod "d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6" (UID: "d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6"). InnerVolumeSpecName "kube-api-access-js6ch". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.735333 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f314ad52-83d6-47fd-931d-892a30cca689-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f314ad52-83d6-47fd-931d-892a30cca689" (UID: "f314ad52-83d6-47fd-931d-892a30cca689"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.794338 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.797202 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sqfvs" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.798243 5014 status_manager.go:851] "Failed to get status for pod" podUID="50cf3400-fb73-4038-b616-2d3559aaf784" pod="openshift-marketplace/certified-operators-sqfvs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sqfvs\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.798434 5014 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.829528 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.829596 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.829659 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.829677 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.829723 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.829745 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.829780 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" 
(UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.829821 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.829859 5014 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.829870 5014 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f314ad52-83d6-47fd-931d-892a30cca689-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.829879 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6zdnb\" (UniqueName: \"kubernetes.io/projected/f314ad52-83d6-47fd-931d-892a30cca689-kube-api-access-6zdnb\") on node \"crc\" DevicePath \"\"" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.829893 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js6ch\" (UniqueName: \"kubernetes.io/projected/d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6-kube-api-access-js6ch\") on node \"crc\" DevicePath \"\"" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.829901 5014 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f314ad52-83d6-47fd-931d-892a30cca689-client-ca\") on node \"crc\" DevicePath \"\"" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.829909 5014 reconciler_common.go:293] "Volume detached for 
volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f314ad52-83d6-47fd-931d-892a30cca689-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.844549 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sqfvs" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.845255 5014 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.845498 5014 status_manager.go:851] "Failed to get status for pod" podUID="50cf3400-fb73-4038-b616-2d3559aaf784" pod="openshift-marketplace/certified-operators-sqfvs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sqfvs\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.935599 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.935652 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 
04:39:05.935682 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.935717 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.935751 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.935738 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.935773 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.935854 5014 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.935855 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.935855 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.935899 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.935909 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.936000 5014 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.936056 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.936199 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:39:05 crc kubenswrapper[5014]: I0228 04:39:05.936236 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.060048 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 04:39:06 crc kubenswrapper[5014]: W0228 04:39:06.089170 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-c3dbe270c6bb92b849ea0bdf97b191a9ab418d8692769f0fe2205642664568e0 WatchSource:0}: Error finding container c3dbe270c6bb92b849ea0bdf97b191a9ab418d8692769f0fe2205642664568e0: Status 404 returned error can't find the container with id c3dbe270c6bb92b849ea0bdf97b191a9ab418d8692769f0fe2205642664568e0 Feb 28 04:39:06 crc kubenswrapper[5014]: E0228 04:39:06.094109 5014 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.150:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18984f35dad090fd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:39:06.093252861 +0000 UTC m=+334.763378781,LastTimestamp:2026-02-28 04:39:06.093252861 +0000 UTC m=+334.763378781,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.221293 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-kx627" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.221829 5014 status_manager.go:851] "Failed to get status for pod" podUID="bd99ec6a-5237-42f9-81ad-bd813d262c6d" pod="openshift-marketplace/community-operators-kx627" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kx627\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.222733 5014 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.223066 5014 status_manager.go:851] "Failed to get status for pod" podUID="50cf3400-fb73-4038-b616-2d3559aaf784" pod="openshift-marketplace/certified-operators-sqfvs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sqfvs\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.235474 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r5h8g" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.236087 5014 status_manager.go:851] "Failed to get status for pod" podUID="bd99ec6a-5237-42f9-81ad-bd813d262c6d" pod="openshift-marketplace/community-operators-kx627" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kx627\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.236435 5014 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.236760 5014 status_manager.go:851] "Failed to get status for pod" podUID="50cf3400-fb73-4038-b616-2d3559aaf784" pod="openshift-marketplace/certified-operators-sqfvs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sqfvs\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.237090 5014 status_manager.go:851] "Failed to get status for pod" podUID="7bdb5d29-5a4c-4358-a276-58efd08a8655" pod="openshift-marketplace/certified-operators-r5h8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-r5h8g\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: E0228 04:39:06.425119 5014 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: E0228 04:39:06.425854 5014 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: E0228 04:39:06.426091 5014 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc 
kubenswrapper[5014]: E0228 04:39:06.426310 5014 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: E0228 04:39:06.426516 5014 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.426547 5014 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 28 04:39:06 crc kubenswrapper[5014]: E0228 04:39:06.426723 5014 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="200ms" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.432385 5014 generic.go:334] "Generic (PLEG): container finished" podID="8f5ec05c-1bc0-41ad-9135-05564f8e3192" containerID="b43bc55a5e0695fb54bfea9bfecc58aa6544d8b5004904ffba28a49556abd9d2" exitCode=0 Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.432472 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"8f5ec05c-1bc0-41ad-9135-05564f8e3192","Type":"ContainerDied","Data":"b43bc55a5e0695fb54bfea9bfecc58aa6544d8b5004904ffba28a49556abd9d2"} Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.433202 5014 status_manager.go:851] "Failed to get status for pod" podUID="50cf3400-fb73-4038-b616-2d3559aaf784" pod="openshift-marketplace/certified-operators-sqfvs" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sqfvs\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.433628 5014 status_manager.go:851] "Failed to get status for pod" podUID="7bdb5d29-5a4c-4358-a276-58efd08a8655" pod="openshift-marketplace/certified-operators-r5h8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-r5h8g\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.433950 5014 status_manager.go:851] "Failed to get status for pod" podUID="8f5ec05c-1bc0-41ad-9135-05564f8e3192" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.434158 5014 status_manager.go:851] "Failed to get status for pod" podUID="bd99ec6a-5237-42f9-81ad-bd813d262c6d" pod="openshift-marketplace/community-operators-kx627" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kx627\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.434429 5014 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.437029 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" 
event={"ID":"f314ad52-83d6-47fd-931d-892a30cca689","Type":"ContainerDied","Data":"8b394d77c55a311a6bd59d64de892a5c71f25ce7def00f983d35ce49fd924a84"} Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.437071 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.437094 5014 scope.go:117] "RemoveContainer" containerID="ed219ae34e580b0cbabf3c1f7bff3799e73016b8a423fd24dea329c390507e00" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.437975 5014 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.438115 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kx627" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.438417 5014 status_manager.go:851] "Failed to get status for pod" podUID="50cf3400-fb73-4038-b616-2d3559aaf784" pod="openshift-marketplace/certified-operators-sqfvs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sqfvs\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.439083 5014 status_manager.go:851] "Failed to get status for pod" podUID="f314ad52-83d6-47fd-931d-892a30cca689" pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-55f4b7c499-pw2kt\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 
crc kubenswrapper[5014]: I0228 04:39:06.440102 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" event={"ID":"d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6","Type":"ContainerDied","Data":"597ac564228c35a6d7d94d1eab12f1acab68b6099ab83ace3b2cecbc68db482e"} Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.439629 5014 status_manager.go:851] "Failed to get status for pod" podUID="7bdb5d29-5a4c-4358-a276-58efd08a8655" pod="openshift-marketplace/certified-operators-r5h8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-r5h8g\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.440946 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.441286 5014 status_manager.go:851] "Failed to get status for pod" podUID="8f5ec05c-1bc0-41ad-9135-05564f8e3192" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.441962 5014 status_manager.go:851] "Failed to get status for pod" podUID="bd99ec6a-5237-42f9-81ad-bd813d262c6d" pod="openshift-marketplace/community-operators-kx627" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kx627\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.442252 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"c3dbe270c6bb92b849ea0bdf97b191a9ab418d8692769f0fe2205642664568e0"} Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.442349 5014 status_manager.go:851] "Failed to get status for pod" podUID="7bdb5d29-5a4c-4358-a276-58efd08a8655" pod="openshift-marketplace/certified-operators-r5h8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-r5h8g\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.442633 5014 status_manager.go:851] "Failed to get status for pod" podUID="8f5ec05c-1bc0-41ad-9135-05564f8e3192" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.443196 5014 status_manager.go:851] "Failed to get status for pod" podUID="bd99ec6a-5237-42f9-81ad-bd813d262c6d" pod="openshift-marketplace/community-operators-kx627" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kx627\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.443532 5014 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.443762 5014 status_manager.go:851] "Failed to get status for pod" podUID="d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6" pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-855985cc94-t8kkh\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.444303 5014 status_manager.go:851] "Failed to get status for pod" podUID="50cf3400-fb73-4038-b616-2d3559aaf784" pod="openshift-marketplace/certified-operators-sqfvs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sqfvs\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.444945 5014 status_manager.go:851] "Failed to get status for pod" podUID="f314ad52-83d6-47fd-931d-892a30cca689" pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-55f4b7c499-pw2kt\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.445343 5014 status_manager.go:851] "Failed to get status for pod" podUID="bd99ec6a-5237-42f9-81ad-bd813d262c6d" pod="openshift-marketplace/community-operators-kx627" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kx627\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.445583 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.445636 5014 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.446034 5014 status_manager.go:851] "Failed to get status for pod" podUID="d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6" pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-855985cc94-t8kkh\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.446448 5014 status_manager.go:851] "Failed to get status for pod" podUID="50cf3400-fb73-4038-b616-2d3559aaf784" pod="openshift-marketplace/certified-operators-sqfvs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sqfvs\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.446714 5014 status_manager.go:851] "Failed to get status for pod" podUID="f314ad52-83d6-47fd-931d-892a30cca689" pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-55f4b7c499-pw2kt\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.446979 5014 status_manager.go:851] "Failed to get status for pod" podUID="7bdb5d29-5a4c-4358-a276-58efd08a8655" pod="openshift-marketplace/certified-operators-r5h8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-r5h8g\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.447238 5014 status_manager.go:851] "Failed to get status for pod" 
podUID="8f5ec05c-1bc0-41ad-9135-05564f8e3192" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.448158 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.449147 5014 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="719151f8073cf997e9ca1087dcbb47e8f83f0e25a700251737c5a5d4b39bfde4" exitCode=0 Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.449231 5014 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea" exitCode=0 Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.449308 5014 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0" exitCode=0 Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.449367 5014 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b" exitCode=2 Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.456680 5014 scope.go:117] "RemoveContainer" containerID="33ae25eb6000da63aba2bb496d04650dea6d8c24f6219bb35a18a80f94d982b4" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.476651 5014 scope.go:117] "RemoveContainer" containerID="acf5b93e297babac22a86a8269bd8b01838975cdfb9560bbd7f92131f0cbbd24" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.504482 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/community-operators-9cznf" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.505286 5014 status_manager.go:851] "Failed to get status for pod" podUID="8f5ec05c-1bc0-41ad-9135-05564f8e3192" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.505898 5014 status_manager.go:851] "Failed to get status for pod" podUID="bd99ec6a-5237-42f9-81ad-bd813d262c6d" pod="openshift-marketplace/community-operators-kx627" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kx627\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.506309 5014 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.506633 5014 status_manager.go:851] "Failed to get status for pod" podUID="d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6" pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-855985cc94-t8kkh\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.506942 5014 status_manager.go:851] "Failed to get status for pod" podUID="50cf3400-fb73-4038-b616-2d3559aaf784" pod="openshift-marketplace/certified-operators-sqfvs" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sqfvs\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.507331 5014 status_manager.go:851] "Failed to get status for pod" podUID="52079806-fc0c-4852-8150-0123d376c1b2" pod="openshift-marketplace/community-operators-9cznf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9cznf\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.507733 5014 status_manager.go:851] "Failed to get status for pod" podUID="f314ad52-83d6-47fd-931d-892a30cca689" pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-55f4b7c499-pw2kt\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: I0228 04:39:06.507997 5014 status_manager.go:851] "Failed to get status for pod" podUID="7bdb5d29-5a4c-4358-a276-58efd08a8655" pod="openshift-marketplace/certified-operators-r5h8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-r5h8g\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:06 crc kubenswrapper[5014]: E0228 04:39:06.627589 5014 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="400ms" Feb 28 04:39:07 crc kubenswrapper[5014]: E0228 04:39:07.028382 5014 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 
38.102.83.150:6443: connect: connection refused" interval="800ms" Feb 28 04:39:07 crc kubenswrapper[5014]: I0228 04:39:07.458881 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"6ecba43321d1feb84452675229c1f6c9df79eec65d6ebc35039770586fa4419e"} Feb 28 04:39:07 crc kubenswrapper[5014]: E0228 04:39:07.771834 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:39:07Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:39:07Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:39:07Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-28T04:39:07Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:07 crc kubenswrapper[5014]: E0228 04:39:07.772452 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:07 crc kubenswrapper[5014]: 
E0228 04:39:07.772967 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:07 crc kubenswrapper[5014]: E0228 04:39:07.773623 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:07 crc kubenswrapper[5014]: E0228 04:39:07.774220 5014 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:07 crc kubenswrapper[5014]: E0228 04:39:07.774246 5014 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 28 04:39:07 crc kubenswrapper[5014]: E0228 04:39:07.834674 5014 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="1.6s" Feb 28 04:39:07 crc kubenswrapper[5014]: I0228 04:39:07.850398 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 28 04:39:07 crc kubenswrapper[5014]: I0228 04:39:07.851086 5014 status_manager.go:851] "Failed to get status for pod" podUID="bd99ec6a-5237-42f9-81ad-bd813d262c6d" pod="openshift-marketplace/community-operators-kx627" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kx627\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:07 crc kubenswrapper[5014]: I0228 04:39:07.851361 5014 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:07 crc kubenswrapper[5014]: I0228 04:39:07.851585 5014 status_manager.go:851] "Failed to get status for pod" podUID="50cf3400-fb73-4038-b616-2d3559aaf784" pod="openshift-marketplace/certified-operators-sqfvs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sqfvs\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:07 crc kubenswrapper[5014]: I0228 04:39:07.851845 5014 status_manager.go:851] "Failed to get status for pod" podUID="d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6" pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-855985cc94-t8kkh\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:07 crc kubenswrapper[5014]: I0228 04:39:07.852046 5014 status_manager.go:851] "Failed to get status for pod" podUID="52079806-fc0c-4852-8150-0123d376c1b2" pod="openshift-marketplace/community-operators-9cznf" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9cznf\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:07 crc kubenswrapper[5014]: I0228 04:39:07.852224 5014 status_manager.go:851] "Failed to get status for pod" podUID="f314ad52-83d6-47fd-931d-892a30cca689" pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-55f4b7c499-pw2kt\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:07 crc kubenswrapper[5014]: I0228 04:39:07.852379 5014 status_manager.go:851] "Failed to get status for pod" podUID="7bdb5d29-5a4c-4358-a276-58efd08a8655" pod="openshift-marketplace/certified-operators-r5h8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-r5h8g\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:07 crc kubenswrapper[5014]: I0228 04:39:07.852539 5014 status_manager.go:851] "Failed to get status for pod" podUID="8f5ec05c-1bc0-41ad-9135-05564f8e3192" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:07 crc kubenswrapper[5014]: I0228 04:39:07.951257 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f5ec05c-1bc0-41ad-9135-05564f8e3192-kube-api-access\") pod \"8f5ec05c-1bc0-41ad-9135-05564f8e3192\" (UID: \"8f5ec05c-1bc0-41ad-9135-05564f8e3192\") " Feb 28 04:39:07 crc kubenswrapper[5014]: I0228 04:39:07.951370 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8f5ec05c-1bc0-41ad-9135-05564f8e3192-var-lock\") pod 
\"8f5ec05c-1bc0-41ad-9135-05564f8e3192\" (UID: \"8f5ec05c-1bc0-41ad-9135-05564f8e3192\") " Feb 28 04:39:07 crc kubenswrapper[5014]: I0228 04:39:07.951417 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f5ec05c-1bc0-41ad-9135-05564f8e3192-kubelet-dir\") pod \"8f5ec05c-1bc0-41ad-9135-05564f8e3192\" (UID: \"8f5ec05c-1bc0-41ad-9135-05564f8e3192\") " Feb 28 04:39:07 crc kubenswrapper[5014]: I0228 04:39:07.951709 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f5ec05c-1bc0-41ad-9135-05564f8e3192-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8f5ec05c-1bc0-41ad-9135-05564f8e3192" (UID: "8f5ec05c-1bc0-41ad-9135-05564f8e3192"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:39:07 crc kubenswrapper[5014]: I0228 04:39:07.951750 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8f5ec05c-1bc0-41ad-9135-05564f8e3192-var-lock" (OuterVolumeSpecName: "var-lock") pod "8f5ec05c-1bc0-41ad-9135-05564f8e3192" (UID: "8f5ec05c-1bc0-41ad-9135-05564f8e3192"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:39:07 crc kubenswrapper[5014]: I0228 04:39:07.968599 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f5ec05c-1bc0-41ad-9135-05564f8e3192-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8f5ec05c-1bc0-41ad-9135-05564f8e3192" (UID: "8f5ec05c-1bc0-41ad-9135-05564f8e3192"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.052941 5014 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8f5ec05c-1bc0-41ad-9135-05564f8e3192-var-lock\") on node \"crc\" DevicePath \"\"" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.052992 5014 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8f5ec05c-1bc0-41ad-9135-05564f8e3192-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.053005 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8f5ec05c-1bc0-41ad-9135-05564f8e3192-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.470650 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.470633 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"8f5ec05c-1bc0-41ad-9135-05564f8e3192","Type":"ContainerDied","Data":"974818a21d93f8dda628b92213034001bdbbd91ea3c870f0e8321c8ee30129b1"} Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.470912 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="974818a21d93f8dda628b92213034001bdbbd91ea3c870f0e8321c8ee30129b1" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.476071 5014 status_manager.go:851] "Failed to get status for pod" podUID="7bdb5d29-5a4c-4358-a276-58efd08a8655" pod="openshift-marketplace/certified-operators-r5h8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-r5h8g\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc 
kubenswrapper[5014]: I0228 04:39:08.476543 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.476743 5014 status_manager.go:851] "Failed to get status for pod" podUID="8f5ec05c-1bc0-41ad-9135-05564f8e3192" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.477538 5014 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e" exitCode=0 Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.477674 5014 status_manager.go:851] "Failed to get status for pod" podUID="bd99ec6a-5237-42f9-81ad-bd813d262c6d" pod="openshift-marketplace/community-operators-kx627" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kx627\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.478384 5014 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.478990 5014 status_manager.go:851] "Failed to get status for pod" podUID="50cf3400-fb73-4038-b616-2d3559aaf784" pod="openshift-marketplace/certified-operators-sqfvs" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sqfvs\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.480194 5014 status_manager.go:851] "Failed to get status for pod" podUID="d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6" pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-855985cc94-t8kkh\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.480688 5014 status_manager.go:851] "Failed to get status for pod" podUID="52079806-fc0c-4852-8150-0123d376c1b2" pod="openshift-marketplace/community-operators-9cznf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9cznf\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.481223 5014 status_manager.go:851] "Failed to get status for pod" podUID="f314ad52-83d6-47fd-931d-892a30cca689" pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-55f4b7c499-pw2kt\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.481763 5014 status_manager.go:851] "Failed to get status for pod" podUID="8f5ec05c-1bc0-41ad-9135-05564f8e3192" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.482259 5014 status_manager.go:851] "Failed to get status for pod" 
podUID="bd99ec6a-5237-42f9-81ad-bd813d262c6d" pod="openshift-marketplace/community-operators-kx627" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kx627\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.482776 5014 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.483301 5014 status_manager.go:851] "Failed to get status for pod" podUID="d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6" pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-855985cc94-t8kkh\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.483870 5014 status_manager.go:851] "Failed to get status for pod" podUID="50cf3400-fb73-4038-b616-2d3559aaf784" pod="openshift-marketplace/certified-operators-sqfvs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sqfvs\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.484339 5014 status_manager.go:851] "Failed to get status for pod" podUID="52079806-fc0c-4852-8150-0123d376c1b2" pod="openshift-marketplace/community-operators-9cznf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9cznf\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 
04:39:08.484995 5014 status_manager.go:851] "Failed to get status for pod" podUID="f314ad52-83d6-47fd-931d-892a30cca689" pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-55f4b7c499-pw2kt\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.485432 5014 status_manager.go:851] "Failed to get status for pod" podUID="7bdb5d29-5a4c-4358-a276-58efd08a8655" pod="openshift-marketplace/certified-operators-r5h8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-r5h8g\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.741712 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5zq82" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.743279 5014 status_manager.go:851] "Failed to get status for pod" podUID="8f5ec05c-1bc0-41ad-9135-05564f8e3192" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.744558 5014 status_manager.go:851] "Failed to get status for pod" podUID="bd99ec6a-5237-42f9-81ad-bd813d262c6d" pod="openshift-marketplace/community-operators-kx627" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kx627\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.746208 5014 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.746923 5014 status_manager.go:851] "Failed to get status for pod" podUID="50cf3400-fb73-4038-b616-2d3559aaf784" pod="openshift-marketplace/certified-operators-sqfvs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sqfvs\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.747336 5014 status_manager.go:851] "Failed to get status for pod" podUID="d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6" pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-855985cc94-t8kkh\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.747896 5014 status_manager.go:851] "Failed to get status for pod" podUID="52079806-fc0c-4852-8150-0123d376c1b2" pod="openshift-marketplace/community-operators-9cznf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9cznf\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.748431 5014 status_manager.go:851] "Failed to get status for pod" podUID="f314ad52-83d6-47fd-931d-892a30cca689" pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-55f4b7c499-pw2kt\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.749882 5014 status_manager.go:851] "Failed to get status for pod" 
podUID="7bdb5d29-5a4c-4358-a276-58efd08a8655" pod="openshift-marketplace/certified-operators-r5h8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-r5h8g\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.755033 5014 status_manager.go:851] "Failed to get status for pod" podUID="bba9702f-9e04-46d4-9a98-92d5303383c4" pod="openshift-marketplace/redhat-operators-5zq82" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-5zq82\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.787997 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5zq82" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.788654 5014 status_manager.go:851] "Failed to get status for pod" podUID="8f5ec05c-1bc0-41ad-9135-05564f8e3192" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.789223 5014 status_manager.go:851] "Failed to get status for pod" podUID="bd99ec6a-5237-42f9-81ad-bd813d262c6d" pod="openshift-marketplace/community-operators-kx627" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kx627\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.790316 5014 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": 
dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.791059 5014 status_manager.go:851] "Failed to get status for pod" podUID="d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6" pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-855985cc94-t8kkh\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.791449 5014 status_manager.go:851] "Failed to get status for pod" podUID="50cf3400-fb73-4038-b616-2d3559aaf784" pod="openshift-marketplace/certified-operators-sqfvs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sqfvs\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.791887 5014 status_manager.go:851] "Failed to get status for pod" podUID="52079806-fc0c-4852-8150-0123d376c1b2" pod="openshift-marketplace/community-operators-9cznf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9cznf\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.792358 5014 status_manager.go:851] "Failed to get status for pod" podUID="f314ad52-83d6-47fd-931d-892a30cca689" pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-55f4b7c499-pw2kt\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.792705 5014 status_manager.go:851] "Failed to get status for pod" podUID="7bdb5d29-5a4c-4358-a276-58efd08a8655" pod="openshift-marketplace/certified-operators-r5h8g" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-r5h8g\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.793051 5014 status_manager.go:851] "Failed to get status for pod" podUID="bba9702f-9e04-46d4-9a98-92d5303383c4" pod="openshift-marketplace/redhat-operators-5zq82" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-5zq82\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.956250 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.957446 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.958056 5014 status_manager.go:851] "Failed to get status for pod" podUID="50cf3400-fb73-4038-b616-2d3559aaf784" pod="openshift-marketplace/certified-operators-sqfvs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sqfvs\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.958488 5014 status_manager.go:851] "Failed to get status for pod" podUID="d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6" pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-855985cc94-t8kkh\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.958898 5014 status_manager.go:851] "Failed to get status for pod" 
podUID="52079806-fc0c-4852-8150-0123d376c1b2" pod="openshift-marketplace/community-operators-9cznf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9cznf\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.959217 5014 status_manager.go:851] "Failed to get status for pod" podUID="f314ad52-83d6-47fd-931d-892a30cca689" pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-55f4b7c499-pw2kt\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.959629 5014 status_manager.go:851] "Failed to get status for pod" podUID="7bdb5d29-5a4c-4358-a276-58efd08a8655" pod="openshift-marketplace/certified-operators-r5h8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-r5h8g\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.960089 5014 status_manager.go:851] "Failed to get status for pod" podUID="bba9702f-9e04-46d4-9a98-92d5303383c4" pod="openshift-marketplace/redhat-operators-5zq82" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-5zq82\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.960414 5014 status_manager.go:851] "Failed to get status for pod" podUID="8f5ec05c-1bc0-41ad-9135-05564f8e3192" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.960938 5014 status_manager.go:851] "Failed to get status for 
pod" podUID="bd99ec6a-5237-42f9-81ad-bd813d262c6d" pod="openshift-marketplace/community-operators-kx627" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kx627\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.961410 5014 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:08 crc kubenswrapper[5014]: I0228 04:39:08.961917 5014 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.066169 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.066304 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.066307 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod 
"f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.066378 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.066423 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.066571 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.067115 5014 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.067149 5014 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.067159 5014 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.138030 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hx7qb" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.138610 5014 status_manager.go:851] "Failed to get status for pod" podUID="50cf3400-fb73-4038-b616-2d3559aaf784" pod="openshift-marketplace/certified-operators-sqfvs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sqfvs\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.139197 5014 status_manager.go:851] "Failed to get status for pod" podUID="d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6" pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-855985cc94-t8kkh\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.139898 5014 status_manager.go:851] "Failed to get status for pod" 
podUID="52079806-fc0c-4852-8150-0123d376c1b2" pod="openshift-marketplace/community-operators-9cznf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9cznf\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.140509 5014 status_manager.go:851] "Failed to get status for pod" podUID="f314ad52-83d6-47fd-931d-892a30cca689" pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-55f4b7c499-pw2kt\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.141776 5014 status_manager.go:851] "Failed to get status for pod" podUID="7bdb5d29-5a4c-4358-a276-58efd08a8655" pod="openshift-marketplace/certified-operators-r5h8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-r5h8g\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.142159 5014 status_manager.go:851] "Failed to get status for pod" podUID="bba9702f-9e04-46d4-9a98-92d5303383c4" pod="openshift-marketplace/redhat-operators-5zq82" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-5zq82\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.142560 5014 status_manager.go:851] "Failed to get status for pod" podUID="8f5ec05c-1bc0-41ad-9135-05564f8e3192" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.143024 5014 status_manager.go:851] "Failed to get status for 
pod" podUID="3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4" pod="openshift-marketplace/redhat-operators-hx7qb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-hx7qb\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.143395 5014 status_manager.go:851] "Failed to get status for pod" podUID="bd99ec6a-5237-42f9-81ad-bd813d262c6d" pod="openshift-marketplace/community-operators-kx627" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kx627\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.143750 5014 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.143980 5014 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.196841 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hx7qb" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.197858 5014 status_manager.go:851] "Failed to get status for pod" podUID="8f5ec05c-1bc0-41ad-9135-05564f8e3192" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 
38.102.83.150:6443: connect: connection refused" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.198229 5014 status_manager.go:851] "Failed to get status for pod" podUID="3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4" pod="openshift-marketplace/redhat-operators-hx7qb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-hx7qb\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.198533 5014 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.198851 5014 status_manager.go:851] "Failed to get status for pod" podUID="bd99ec6a-5237-42f9-81ad-bd813d262c6d" pod="openshift-marketplace/community-operators-kx627" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kx627\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.199290 5014 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.199760 5014 status_manager.go:851] "Failed to get status for pod" podUID="50cf3400-fb73-4038-b616-2d3559aaf784" pod="openshift-marketplace/certified-operators-sqfvs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sqfvs\": dial tcp 
38.102.83.150:6443: connect: connection refused" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.200195 5014 status_manager.go:851] "Failed to get status for pod" podUID="d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6" pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-855985cc94-t8kkh\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.200582 5014 status_manager.go:851] "Failed to get status for pod" podUID="52079806-fc0c-4852-8150-0123d376c1b2" pod="openshift-marketplace/community-operators-9cznf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9cznf\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.201028 5014 status_manager.go:851] "Failed to get status for pod" podUID="f314ad52-83d6-47fd-931d-892a30cca689" pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-55f4b7c499-pw2kt\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.201418 5014 status_manager.go:851] "Failed to get status for pod" podUID="7bdb5d29-5a4c-4358-a276-58efd08a8655" pod="openshift-marketplace/certified-operators-r5h8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-r5h8g\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.201925 5014 status_manager.go:851] "Failed to get status for pod" podUID="bba9702f-9e04-46d4-9a98-92d5303383c4" pod="openshift-marketplace/redhat-operators-5zq82" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-5zq82\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:09 crc kubenswrapper[5014]: E0228 04:39:09.408779 5014 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.150:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.18984f35dad090fd openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-28 04:39:06.093252861 +0000 UTC m=+334.763378781,LastTimestamp:2026-02-28 04:39:06.093252861 +0000 UTC m=+334.763378781,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 28 04:39:09 crc kubenswrapper[5014]: E0228 04:39:09.436132 5014 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="3.2s" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.487141 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 
04:39:09.488019 5014 scope.go:117] "RemoveContainer" containerID="719151f8073cf997e9ca1087dcbb47e8f83f0e25a700251737c5a5d4b39bfde4" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.488128 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.508399 5014 status_manager.go:851] "Failed to get status for pod" podUID="bd99ec6a-5237-42f9-81ad-bd813d262c6d" pod="openshift-marketplace/community-operators-kx627" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kx627\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.508765 5014 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.509121 5014 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.509379 5014 status_manager.go:851] "Failed to get status for pod" podUID="50cf3400-fb73-4038-b616-2d3559aaf784" pod="openshift-marketplace/certified-operators-sqfvs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sqfvs\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.509553 5014 
scope.go:117] "RemoveContainer" containerID="48ed432868faa4404ece87593e9318fee907da76493b0f2d8bbffc29d4befcea" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.509638 5014 status_manager.go:851] "Failed to get status for pod" podUID="d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6" pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-855985cc94-t8kkh\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.509873 5014 status_manager.go:851] "Failed to get status for pod" podUID="52079806-fc0c-4852-8150-0123d376c1b2" pod="openshift-marketplace/community-operators-9cznf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9cznf\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.510297 5014 status_manager.go:851] "Failed to get status for pod" podUID="f314ad52-83d6-47fd-931d-892a30cca689" pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-55f4b7c499-pw2kt\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.511140 5014 status_manager.go:851] "Failed to get status for pod" podUID="7bdb5d29-5a4c-4358-a276-58efd08a8655" pod="openshift-marketplace/certified-operators-r5h8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-r5h8g\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.511374 5014 status_manager.go:851] "Failed to get status for pod" podUID="bba9702f-9e04-46d4-9a98-92d5303383c4" 
pod="openshift-marketplace/redhat-operators-5zq82" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-5zq82\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.511606 5014 status_manager.go:851] "Failed to get status for pod" podUID="8f5ec05c-1bc0-41ad-9135-05564f8e3192" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.511849 5014 status_manager.go:851] "Failed to get status for pod" podUID="3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4" pod="openshift-marketplace/redhat-operators-hx7qb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-hx7qb\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.531240 5014 scope.go:117] "RemoveContainer" containerID="12419ba85cec10ca68bc8ea3a7c4d462b881900af430425d3d710f97624055e0" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.547609 5014 scope.go:117] "RemoveContainer" containerID="2cb8d1b18671c0da32a4dc9ab31c95e7fb4c8c05d5b98545d6b381cd051c3a9b" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.567667 5014 scope.go:117] "RemoveContainer" containerID="b8b85ac7cf25fca8421a4689cfa03f88de58fb964d9619e23624280fc2354c6e" Feb 28 04:39:09 crc kubenswrapper[5014]: I0228 04:39:09.602125 5014 scope.go:117] "RemoveContainer" containerID="408829efb31ffcc3b0d98e63f292d737476802796ed21a3ab1057881cd825246" Feb 28 04:39:10 crc kubenswrapper[5014]: I0228 04:39:10.081501 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:39:10 crc kubenswrapper[5014]: I0228 04:39:10.081649 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:39:10 crc kubenswrapper[5014]: W0228 04:39:10.082478 5014 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27253": dial tcp 38.102.83.150:6443: connect: connection refused Feb 28 04:39:10 crc kubenswrapper[5014]: E0228 04:39:10.082587 5014 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27253\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 28 04:39:10 crc kubenswrapper[5014]: W0228 04:39:10.082565 5014 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27251": dial tcp 38.102.83.150:6443: connect: 
connection refused Feb 28 04:39:10 crc kubenswrapper[5014]: E0228 04:39:10.082690 5014 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27251\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 28 04:39:10 crc kubenswrapper[5014]: I0228 04:39:10.183651 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:39:10 crc kubenswrapper[5014]: I0228 04:39:10.183910 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:39:10 crc kubenswrapper[5014]: I0228 04:39:10.184771 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 28 04:39:10 crc kubenswrapper[5014]: W0228 04:39:10.184864 5014 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27251": dial tcp 38.102.83.150:6443: connect: 
connection refused Feb 28 04:39:10 crc kubenswrapper[5014]: E0228 04:39:10.184994 5014 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27251\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 28 04:39:11 crc kubenswrapper[5014]: E0228 04:39:11.082282 5014 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: failed to sync configmap cache: timed out waiting for the condition Feb 28 04:39:11 crc kubenswrapper[5014]: E0228 04:39:11.082460 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-28 04:41:13.082420264 +0000 UTC m=+461.752546214 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : failed to sync configmap cache: timed out waiting for the condition Feb 28 04:39:11 crc kubenswrapper[5014]: E0228 04:39:11.082561 5014 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: failed to sync secret cache: timed out waiting for the condition Feb 28 04:39:11 crc kubenswrapper[5014]: E0228 04:39:11.082676 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-28 04:41:13.082650381 +0000 UTC m=+461.752776331 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : failed to sync secret cache: timed out waiting for the condition Feb 28 04:39:11 crc kubenswrapper[5014]: E0228 04:39:11.185249 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 28 04:39:11 crc kubenswrapper[5014]: E0228 04:39:11.185309 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 28 04:39:11 crc kubenswrapper[5014]: W0228 04:39:11.186180 5014 reflector.go:561] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27251": dial tcp 38.102.83.150:6443: connect: connection refused Feb 28 04:39:11 crc kubenswrapper[5014]: E0228 04:39:11.186331 5014 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27251\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 28 04:39:12 crc kubenswrapper[5014]: W0228 04:39:12.025956 5014 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: 
Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27251": dial tcp 38.102.83.150:6443: connect: connection refused Feb 28 04:39:12 crc kubenswrapper[5014]: E0228 04:39:12.026071 5014 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27251\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 28 04:39:12 crc kubenswrapper[5014]: W0228 04:39:12.150395 5014 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27251": dial tcp 38.102.83.150:6443: connect: connection refused Feb 28 04:39:12 crc kubenswrapper[5014]: E0228 04:39:12.150513 5014 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27251\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 28 04:39:12 crc kubenswrapper[5014]: I0228 04:39:12.174264 5014 status_manager.go:851] "Failed to get status for pod" podUID="bd99ec6a-5237-42f9-81ad-bd813d262c6d" pod="openshift-marketplace/community-operators-kx627" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kx627\": dial tcp 38.102.83.150:6443: connect: 
connection refused" Feb 28 04:39:12 crc kubenswrapper[5014]: I0228 04:39:12.174635 5014 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:12 crc kubenswrapper[5014]: I0228 04:39:12.175352 5014 status_manager.go:851] "Failed to get status for pod" podUID="50cf3400-fb73-4038-b616-2d3559aaf784" pod="openshift-marketplace/certified-operators-sqfvs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sqfvs\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:12 crc kubenswrapper[5014]: I0228 04:39:12.175680 5014 status_manager.go:851] "Failed to get status for pod" podUID="d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6" pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-855985cc94-t8kkh\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:12 crc kubenswrapper[5014]: I0228 04:39:12.175977 5014 status_manager.go:851] "Failed to get status for pod" podUID="52079806-fc0c-4852-8150-0123d376c1b2" pod="openshift-marketplace/community-operators-9cznf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9cznf\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:12 crc kubenswrapper[5014]: I0228 04:39:12.176334 5014 status_manager.go:851] "Failed to get status for pod" podUID="f314ad52-83d6-47fd-931d-892a30cca689" pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-55f4b7c499-pw2kt\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:12 crc kubenswrapper[5014]: I0228 04:39:12.176588 5014 status_manager.go:851] "Failed to get status for pod" podUID="7bdb5d29-5a4c-4358-a276-58efd08a8655" pod="openshift-marketplace/certified-operators-r5h8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-r5h8g\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:12 crc kubenswrapper[5014]: I0228 04:39:12.177010 5014 status_manager.go:851] "Failed to get status for pod" podUID="bba9702f-9e04-46d4-9a98-92d5303383c4" pod="openshift-marketplace/redhat-operators-5zq82" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-5zq82\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:12 crc kubenswrapper[5014]: I0228 04:39:12.177474 5014 status_manager.go:851] "Failed to get status for pod" podUID="8f5ec05c-1bc0-41ad-9135-05564f8e3192" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:12 crc kubenswrapper[5014]: I0228 04:39:12.177709 5014 status_manager.go:851] "Failed to get status for pod" podUID="3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4" pod="openshift-marketplace/redhat-operators-hx7qb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-hx7qb\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:12 crc kubenswrapper[5014]: E0228 04:39:12.185871 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 28 04:39:12 
crc kubenswrapper[5014]: E0228 04:39:12.185916 5014 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: failed to sync configmap cache: timed out waiting for the condition Feb 28 04:39:12 crc kubenswrapper[5014]: E0228 04:39:12.185957 5014 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 28 04:39:12 crc kubenswrapper[5014]: E0228 04:39:12.185993 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-28 04:41:14.185971541 +0000 UTC m=+462.856097461 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : failed to sync configmap cache: timed out waiting for the condition Feb 28 04:39:12 crc kubenswrapper[5014]: E0228 04:39:12.185997 5014 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: failed to sync configmap cache: timed out waiting for the condition Feb 28 04:39:12 crc kubenswrapper[5014]: E0228 04:39:12.186082 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-28 04:41:14.186053184 +0000 UTC m=+462.856179134 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : failed to sync configmap cache: timed out waiting for the condition Feb 28 04:39:12 crc kubenswrapper[5014]: W0228 04:39:12.454644 5014 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27253": dial tcp 38.102.83.150:6443: connect: connection refused Feb 28 04:39:12 crc kubenswrapper[5014]: E0228 04:39:12.454757 5014 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27253\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 28 04:39:12 crc kubenswrapper[5014]: E0228 04:39:12.637108 5014 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.150:6443: connect: connection refused" interval="6.4s" Feb 28 04:39:13 crc kubenswrapper[5014]: W0228 04:39:13.206509 5014 reflector.go:561] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27251": dial tcp 38.102.83.150:6443: connect: connection refused Feb 28 04:39:13 
crc kubenswrapper[5014]: E0228 04:39:13.206589 5014 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27251\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 28 04:39:16 crc kubenswrapper[5014]: I0228 04:39:16.171591 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:39:16 crc kubenswrapper[5014]: I0228 04:39:16.172886 5014 status_manager.go:851] "Failed to get status for pod" podUID="52079806-fc0c-4852-8150-0123d376c1b2" pod="openshift-marketplace/community-operators-9cznf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9cznf\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:16 crc kubenswrapper[5014]: I0228 04:39:16.173589 5014 status_manager.go:851] "Failed to get status for pod" podUID="f314ad52-83d6-47fd-931d-892a30cca689" pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-55f4b7c499-pw2kt\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:16 crc kubenswrapper[5014]: I0228 04:39:16.174070 5014 status_manager.go:851] "Failed to get status for pod" podUID="7bdb5d29-5a4c-4358-a276-58efd08a8655" pod="openshift-marketplace/certified-operators-r5h8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-r5h8g\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:16 crc kubenswrapper[5014]: I0228 04:39:16.174409 5014 status_manager.go:851] 
"Failed to get status for pod" podUID="bba9702f-9e04-46d4-9a98-92d5303383c4" pod="openshift-marketplace/redhat-operators-5zq82" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-5zq82\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:16 crc kubenswrapper[5014]: I0228 04:39:16.174732 5014 status_manager.go:851] "Failed to get status for pod" podUID="8f5ec05c-1bc0-41ad-9135-05564f8e3192" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:16 crc kubenswrapper[5014]: I0228 04:39:16.175189 5014 status_manager.go:851] "Failed to get status for pod" podUID="3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4" pod="openshift-marketplace/redhat-operators-hx7qb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-hx7qb\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:16 crc kubenswrapper[5014]: I0228 04:39:16.175477 5014 status_manager.go:851] "Failed to get status for pod" podUID="bd99ec6a-5237-42f9-81ad-bd813d262c6d" pod="openshift-marketplace/community-operators-kx627" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kx627\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:16 crc kubenswrapper[5014]: I0228 04:39:16.175832 5014 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:16 crc kubenswrapper[5014]: I0228 04:39:16.176188 5014 status_manager.go:851] "Failed to get 
status for pod" podUID="d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6" pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-855985cc94-t8kkh\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:16 crc kubenswrapper[5014]: I0228 04:39:16.176535 5014 status_manager.go:851] "Failed to get status for pod" podUID="50cf3400-fb73-4038-b616-2d3559aaf784" pod="openshift-marketplace/certified-operators-sqfvs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sqfvs\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:16 crc kubenswrapper[5014]: I0228 04:39:16.192443 5014 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2b29aed6-db00-4c95-831f-f3230a6edd2d" Feb 28 04:39:16 crc kubenswrapper[5014]: I0228 04:39:16.192482 5014 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2b29aed6-db00-4c95-831f-f3230a6edd2d" Feb 28 04:39:16 crc kubenswrapper[5014]: E0228 04:39:16.192874 5014 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:39:16 crc kubenswrapper[5014]: I0228 04:39:16.193412 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:39:16 crc kubenswrapper[5014]: W0228 04:39:16.498463 5014 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27251": dial tcp 38.102.83.150:6443: connect: connection refused Feb 28 04:39:16 crc kubenswrapper[5014]: E0228 04:39:16.498991 5014 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27251\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 28 04:39:16 crc kubenswrapper[5014]: W0228 04:39:16.518038 5014 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27251": dial tcp 38.102.83.150:6443: connect: connection refused Feb 28 04:39:16 crc kubenswrapper[5014]: E0228 04:39:16.518117 5014 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27251\": dial tcp 38.102.83.150:6443: connect: connection refused" logger="UnhandledError" Feb 28 04:39:16 crc kubenswrapper[5014]: I0228 04:39:16.534642 5014 generic.go:334] "Generic (PLEG): container finished" 
podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="a4fc537c2eb5a80c64f18bfe34978a20d580f27e5eb5e0d6e0b4ae36a914a470" exitCode=0 Feb 28 04:39:16 crc kubenswrapper[5014]: I0228 04:39:16.534694 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"a4fc537c2eb5a80c64f18bfe34978a20d580f27e5eb5e0d6e0b4ae36a914a470"} Feb 28 04:39:16 crc kubenswrapper[5014]: I0228 04:39:16.534720 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"940f4db6b605ef7db4cd97a4265c40cfe99d82be3cfb5881fa5c649e25b2a9fc"} Feb 28 04:39:16 crc kubenswrapper[5014]: I0228 04:39:16.534976 5014 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2b29aed6-db00-4c95-831f-f3230a6edd2d" Feb 28 04:39:16 crc kubenswrapper[5014]: I0228 04:39:16.534990 5014 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2b29aed6-db00-4c95-831f-f3230a6edd2d" Feb 28 04:39:16 crc kubenswrapper[5014]: E0228 04:39:16.535264 5014 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:39:16 crc kubenswrapper[5014]: I0228 04:39:16.535515 5014 status_manager.go:851] "Failed to get status for pod" podUID="3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4" pod="openshift-marketplace/redhat-operators-hx7qb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-hx7qb\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:16 crc kubenswrapper[5014]: I0228 04:39:16.535948 5014 
status_manager.go:851] "Failed to get status for pod" podUID="bd99ec6a-5237-42f9-81ad-bd813d262c6d" pod="openshift-marketplace/community-operators-kx627" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-kx627\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:16 crc kubenswrapper[5014]: I0228 04:39:16.536260 5014 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:16 crc kubenswrapper[5014]: I0228 04:39:16.536568 5014 status_manager.go:851] "Failed to get status for pod" podUID="50cf3400-fb73-4038-b616-2d3559aaf784" pod="openshift-marketplace/certified-operators-sqfvs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-sqfvs\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:16 crc kubenswrapper[5014]: I0228 04:39:16.536849 5014 status_manager.go:851] "Failed to get status for pod" podUID="d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6" pod="openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-855985cc94-t8kkh\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:16 crc kubenswrapper[5014]: I0228 04:39:16.537088 5014 status_manager.go:851] "Failed to get status for pod" podUID="52079806-fc0c-4852-8150-0123d376c1b2" pod="openshift-marketplace/community-operators-9cznf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-9cznf\": dial tcp 38.102.83.150:6443: connect: connection refused" 
Feb 28 04:39:16 crc kubenswrapper[5014]: I0228 04:39:16.537349 5014 status_manager.go:851] "Failed to get status for pod" podUID="f314ad52-83d6-47fd-931d-892a30cca689" pod="openshift-controller-manager/controller-manager-55f4b7c499-pw2kt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-55f4b7c499-pw2kt\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:16 crc kubenswrapper[5014]: I0228 04:39:16.537545 5014 status_manager.go:851] "Failed to get status for pod" podUID="7bdb5d29-5a4c-4358-a276-58efd08a8655" pod="openshift-marketplace/certified-operators-r5h8g" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-r5h8g\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:16 crc kubenswrapper[5014]: I0228 04:39:16.537761 5014 status_manager.go:851] "Failed to get status for pod" podUID="bba9702f-9e04-46d4-9a98-92d5303383c4" pod="openshift-marketplace/redhat-operators-5zq82" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-5zq82\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:16 crc kubenswrapper[5014]: I0228 04:39:16.538002 5014 status_manager.go:851] "Failed to get status for pod" podUID="8f5ec05c-1bc0-41ad-9135-05564f8e3192" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.150:6443: connect: connection refused" Feb 28 04:39:17 crc kubenswrapper[5014]: E0228 04:39:17.187722 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-cqllr], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 28 04:39:17 crc 
kubenswrapper[5014]: E0228 04:39:17.194940 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-s2dwl], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 28 04:39:17 crc kubenswrapper[5014]: E0228 04:39:17.200215 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[networking-console-plugin-cert nginx-conf], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 28 04:39:17 crc kubenswrapper[5014]: I0228 04:39:17.554332 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d49e9a218dbeff60eda4e3e99603c80b103444e8ece3b97498514b4f90603594"} Feb 28 04:39:17 crc kubenswrapper[5014]: I0228 04:39:17.554399 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f07a3f6c1cbb0532c73f71d5b018000ce90e1840ca296a502709234d6ef2e5a1"} Feb 28 04:39:17 crc kubenswrapper[5014]: I0228 04:39:17.554425 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b84bde9210dc73f43351e2af60be877be1fea2138dbfc6dbc6847376c1c57bc3"} Feb 28 04:39:17 crc kubenswrapper[5014]: I0228 04:39:17.554438 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"03eade2b38e1c56438bbf534bafe4734f134c5fd98915b54b54aef67c3aab6f8"} 
Feb 28 04:39:18 crc kubenswrapper[5014]: I0228 04:39:18.562223 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 28 04:39:18 crc kubenswrapper[5014]: I0228 04:39:18.563761 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 28 04:39:18 crc kubenswrapper[5014]: I0228 04:39:18.563835 5014 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="d814b990a8048b944b6ca2b8a1aa5b585368ce3a5d89b7b0993e92a291fa9fa9" exitCode=1 Feb 28 04:39:18 crc kubenswrapper[5014]: I0228 04:39:18.563894 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"d814b990a8048b944b6ca2b8a1aa5b585368ce3a5d89b7b0993e92a291fa9fa9"} Feb 28 04:39:18 crc kubenswrapper[5014]: I0228 04:39:18.564383 5014 scope.go:117] "RemoveContainer" containerID="d814b990a8048b944b6ca2b8a1aa5b585368ce3a5d89b7b0993e92a291fa9fa9" Feb 28 04:39:18 crc kubenswrapper[5014]: I0228 04:39:18.568788 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"eba32d9b6ce79f1519ba0ab2ecba8d6ca9b1fc2336e651457a945f5544a26235"} Feb 28 04:39:18 crc kubenswrapper[5014]: I0228 04:39:18.569096 5014 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2b29aed6-db00-4c95-831f-f3230a6edd2d" Feb 28 04:39:18 crc kubenswrapper[5014]: I0228 04:39:18.569125 5014 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2b29aed6-db00-4c95-831f-f3230a6edd2d" 
Feb 28 04:39:18 crc kubenswrapper[5014]: I0228 04:39:18.569317 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:39:18 crc kubenswrapper[5014]: I0228 04:39:18.807003 5014 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 04:39:19 crc kubenswrapper[5014]: I0228 04:39:19.580408 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 28 04:39:19 crc kubenswrapper[5014]: I0228 04:39:19.581300 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 28 04:39:19 crc kubenswrapper[5014]: I0228 04:39:19.581355 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bae997c9cb983548cab2c8268178e95870e9ca342e865db2d80d68ce70c0f972"} Feb 28 04:39:21 crc kubenswrapper[5014]: I0228 04:39:21.194857 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:39:21 crc kubenswrapper[5014]: I0228 04:39:21.194959 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:39:21 crc kubenswrapper[5014]: I0228 04:39:21.204759 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:39:22 crc kubenswrapper[5014]: I0228 04:39:22.817151 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" 
podUID="56dc15d6-ebc6-459c-9847-c9f8c66dffe4" containerName="oauth-openshift" containerID="cri-o://403e7d3890786e07592e30b30e741dafceab7b1b05e069dc14623eb1bc63c372" gracePeriod=15 Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.263835 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.273724 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.279183 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-user-template-login\") pod \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.279247 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-audit-dir\") pod \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.279281 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-serving-cert\") pod \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.279312 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-user-template-provider-selection\") pod \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.279336 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-cliconfig\") pod \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.279368 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnzgw\" (UniqueName: \"kubernetes.io/projected/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-kube-api-access-cnzgw\") pod \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.279394 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-session\") pod \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.279448 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-service-ca\") pod \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.279474 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-trusted-ca-bundle\") pod \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.279509 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-user-idp-0-file-data\") pod \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.279536 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-audit-policies\") pod \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.279560 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "56dc15d6-ebc6-459c-9847-c9f8c66dffe4" (UID: "56dc15d6-ebc6-459c-9847-c9f8c66dffe4"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.279590 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-ocp-branding-template\") pod \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.279632 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-user-template-error\") pod \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.279661 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-router-certs\") pod \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\" (UID: \"56dc15d6-ebc6-459c-9847-c9f8c66dffe4\") " Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.280070 5014 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.280306 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "56dc15d6-ebc6-459c-9847-c9f8c66dffe4" (UID: "56dc15d6-ebc6-459c-9847-c9f8c66dffe4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.280450 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "56dc15d6-ebc6-459c-9847-c9f8c66dffe4" (UID: "56dc15d6-ebc6-459c-9847-c9f8c66dffe4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.281041 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "56dc15d6-ebc6-459c-9847-c9f8c66dffe4" (UID: "56dc15d6-ebc6-459c-9847-c9f8c66dffe4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.281252 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "56dc15d6-ebc6-459c-9847-c9f8c66dffe4" (UID: "56dc15d6-ebc6-459c-9847-c9f8c66dffe4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.287599 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "56dc15d6-ebc6-459c-9847-c9f8c66dffe4" (UID: "56dc15d6-ebc6-459c-9847-c9f8c66dffe4"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.288120 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "56dc15d6-ebc6-459c-9847-c9f8c66dffe4" (UID: "56dc15d6-ebc6-459c-9847-c9f8c66dffe4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.288491 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "56dc15d6-ebc6-459c-9847-c9f8c66dffe4" (UID: "56dc15d6-ebc6-459c-9847-c9f8c66dffe4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.290450 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "56dc15d6-ebc6-459c-9847-c9f8c66dffe4" (UID: "56dc15d6-ebc6-459c-9847-c9f8c66dffe4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.290765 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "56dc15d6-ebc6-459c-9847-c9f8c66dffe4" (UID: "56dc15d6-ebc6-459c-9847-c9f8c66dffe4"). 
InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.291459 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "56dc15d6-ebc6-459c-9847-c9f8c66dffe4" (UID: "56dc15d6-ebc6-459c-9847-c9f8c66dffe4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.291618 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-kube-api-access-cnzgw" (OuterVolumeSpecName: "kube-api-access-cnzgw") pod "56dc15d6-ebc6-459c-9847-c9f8c66dffe4" (UID: "56dc15d6-ebc6-459c-9847-c9f8c66dffe4"). InnerVolumeSpecName "kube-api-access-cnzgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.293285 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "56dc15d6-ebc6-459c-9847-c9f8c66dffe4" (UID: "56dc15d6-ebc6-459c-9847-c9f8c66dffe4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.307831 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.311874 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "56dc15d6-ebc6-459c-9847-c9f8c66dffe4" (UID: "56dc15d6-ebc6-459c-9847-c9f8c66dffe4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.381187 5014 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.381237 5014 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.381259 5014 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.381276 5014 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.381293 5014 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.381310 5014 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.381329 5014 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.381345 5014 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.381401 5014 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.381415 5014 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.381429 5014 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.381442 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnzgw\" (UniqueName: \"kubernetes.io/projected/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-kube-api-access-cnzgw\") on node \"crc\" DevicePath \"\"" Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.381453 5014 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/56dc15d6-ebc6-459c-9847-c9f8c66dffe4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.576932 5014 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.610396 5014 generic.go:334] "Generic (PLEG): container finished" podID="56dc15d6-ebc6-459c-9847-c9f8c66dffe4" containerID="403e7d3890786e07592e30b30e741dafceab7b1b05e069dc14623eb1bc63c372" exitCode=0 Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.610439 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" event={"ID":"56dc15d6-ebc6-459c-9847-c9f8c66dffe4","Type":"ContainerDied","Data":"403e7d3890786e07592e30b30e741dafceab7b1b05e069dc14623eb1bc63c372"} Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.610466 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" event={"ID":"56dc15d6-ebc6-459c-9847-c9f8c66dffe4","Type":"ContainerDied","Data":"1cddd47f9a8e7604446601577fa2295d5fd2ac61d275ef7b1c4c914287234d62"} Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.610486 5014 scope.go:117] "RemoveContainer" containerID="403e7d3890786e07592e30b30e741dafceab7b1b05e069dc14623eb1bc63c372" Feb 28 
04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.610509 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fkqnd" Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.630242 5014 scope.go:117] "RemoveContainer" containerID="403e7d3890786e07592e30b30e741dafceab7b1b05e069dc14623eb1bc63c372" Feb 28 04:39:23 crc kubenswrapper[5014]: E0228 04:39:23.630767 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"403e7d3890786e07592e30b30e741dafceab7b1b05e069dc14623eb1bc63c372\": container with ID starting with 403e7d3890786e07592e30b30e741dafceab7b1b05e069dc14623eb1bc63c372 not found: ID does not exist" containerID="403e7d3890786e07592e30b30e741dafceab7b1b05e069dc14623eb1bc63c372" Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.630823 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"403e7d3890786e07592e30b30e741dafceab7b1b05e069dc14623eb1bc63c372"} err="failed to get container status \"403e7d3890786e07592e30b30e741dafceab7b1b05e069dc14623eb1bc63c372\": rpc error: code = NotFound desc = could not find container \"403e7d3890786e07592e30b30e741dafceab7b1b05e069dc14623eb1bc63c372\": container with ID starting with 403e7d3890786e07592e30b30e741dafceab7b1b05e069dc14623eb1bc63c372 not found: ID does not exist" Feb 28 04:39:23 crc kubenswrapper[5014]: I0228 04:39:23.838521 5014 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="7c0b112c-8427-4888-901a-df8cc34c0da1" Feb 28 04:39:24 crc kubenswrapper[5014]: I0228 04:39:24.435344 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 28 04:39:24 crc kubenswrapper[5014]: I0228 04:39:24.619989 5014 kubelet.go:1909] 
"Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2b29aed6-db00-4c95-831f-f3230a6edd2d" Feb 28 04:39:24 crc kubenswrapper[5014]: I0228 04:39:24.620041 5014 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2b29aed6-db00-4c95-831f-f3230a6edd2d" Feb 28 04:39:24 crc kubenswrapper[5014]: I0228 04:39:24.625788 5014 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="7c0b112c-8427-4888-901a-df8cc34c0da1" Feb 28 04:39:24 crc kubenswrapper[5014]: I0228 04:39:24.626604 5014 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://03eade2b38e1c56438bbf534bafe4734f134c5fd98915b54b54aef67c3aab6f8" Feb 28 04:39:24 crc kubenswrapper[5014]: I0228 04:39:24.626654 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:39:25 crc kubenswrapper[5014]: I0228 04:39:25.313503 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 04:39:25 crc kubenswrapper[5014]: I0228 04:39:25.626045 5014 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2b29aed6-db00-4c95-831f-f3230a6edd2d" Feb 28 04:39:25 crc kubenswrapper[5014]: I0228 04:39:25.626107 5014 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2b29aed6-db00-4c95-831f-f3230a6edd2d" Feb 28 04:39:25 crc kubenswrapper[5014]: I0228 04:39:25.629911 5014 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="7c0b112c-8427-4888-901a-df8cc34c0da1" Feb 
28 04:39:26 crc kubenswrapper[5014]: I0228 04:39:26.662353 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 04:39:26 crc kubenswrapper[5014]: I0228 04:39:26.669744 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 04:39:27 crc kubenswrapper[5014]: I0228 04:39:27.679054 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 28 04:39:29 crc kubenswrapper[5014]: I0228 04:39:29.750882 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 28 04:39:29 crc kubenswrapper[5014]: I0228 04:39:29.823445 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 28 04:39:29 crc kubenswrapper[5014]: I0228 04:39:29.881607 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 28 04:39:29 crc kubenswrapper[5014]: I0228 04:39:29.929368 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 28 04:39:30 crc kubenswrapper[5014]: I0228 04:39:30.087299 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 28 04:39:30 crc kubenswrapper[5014]: I0228 04:39:30.088078 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 28 04:39:30 crc kubenswrapper[5014]: I0228 04:39:30.171404 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:39:30 crc kubenswrapper[5014]: I0228 04:39:30.213270 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 28 04:39:30 crc kubenswrapper[5014]: I0228 04:39:30.286597 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 28 04:39:30 crc kubenswrapper[5014]: I0228 04:39:30.644479 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Feb 28 04:39:30 crc kubenswrapper[5014]: I0228 04:39:30.718171 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 28 04:39:30 crc kubenswrapper[5014]: I0228 04:39:30.907871 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 28 04:39:31 crc kubenswrapper[5014]: I0228 04:39:31.154253 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 28 04:39:31 crc kubenswrapper[5014]: I0228 04:39:31.423785 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 28 04:39:31 crc kubenswrapper[5014]: I0228 04:39:31.827613 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 28 04:39:31 crc kubenswrapper[5014]: I0228 04:39:31.843963 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 28 04:39:32 crc kubenswrapper[5014]: I0228 04:39:32.041182 5014 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 28 04:39:32 crc kubenswrapper[5014]: I0228 04:39:32.087540 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 28 04:39:32 crc kubenswrapper[5014]: I0228 04:39:32.171233 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:39:32 crc kubenswrapper[5014]: I0228 04:39:32.185971 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:39:32 crc kubenswrapper[5014]: I0228 04:39:32.194475 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 28 04:39:32 crc kubenswrapper[5014]: I0228 04:39:32.230364 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 28 04:39:32 crc kubenswrapper[5014]: I0228 04:39:32.310563 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 28 04:39:32 crc kubenswrapper[5014]: I0228 04:39:32.329299 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 28 04:39:32 crc kubenswrapper[5014]: I0228 04:39:32.351501 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 28 04:39:32 crc kubenswrapper[5014]: I0228 04:39:32.363966 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 28 04:39:32 crc kubenswrapper[5014]: I0228 04:39:32.573797 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 28 04:39:32 crc 
kubenswrapper[5014]: I0228 04:39:32.624000 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 28 04:39:32 crc kubenswrapper[5014]: I0228 04:39:32.626763 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 28 04:39:32 crc kubenswrapper[5014]: I0228 04:39:32.733065 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 28 04:39:32 crc kubenswrapper[5014]: I0228 04:39:32.744660 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 28 04:39:32 crc kubenswrapper[5014]: I0228 04:39:32.887279 5014 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 28 04:39:32 crc kubenswrapper[5014]: I0228 04:39:32.979555 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 28 04:39:33 crc kubenswrapper[5014]: I0228 04:39:33.245773 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 28 04:39:33 crc kubenswrapper[5014]: I0228 04:39:33.730961 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 28 04:39:33 crc kubenswrapper[5014]: I0228 04:39:33.808113 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 28 04:39:33 crc kubenswrapper[5014]: I0228 04:39:33.852557 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Feb 28 04:39:33 crc kubenswrapper[5014]: I0228 04:39:33.903880 5014 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 28 04:39:33 crc kubenswrapper[5014]: I0228 04:39:33.983757 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 28 04:39:34 crc kubenswrapper[5014]: I0228 04:39:34.000616 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 28 04:39:34 crc kubenswrapper[5014]: I0228 04:39:34.012901 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 28 04:39:34 crc kubenswrapper[5014]: I0228 04:39:34.077954 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 28 04:39:34 crc kubenswrapper[5014]: I0228 04:39:34.249855 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 28 04:39:34 crc kubenswrapper[5014]: I0228 04:39:34.356829 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 28 04:39:34 crc kubenswrapper[5014]: I0228 04:39:34.599856 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 28 04:39:34 crc kubenswrapper[5014]: I0228 04:39:34.706621 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 28 04:39:34 crc kubenswrapper[5014]: I0228 04:39:34.790612 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 28 04:39:35 crc kubenswrapper[5014]: I0228 04:39:35.004543 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 28 04:39:35 crc kubenswrapper[5014]: I0228 04:39:35.053509 5014 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 28 04:39:35 crc kubenswrapper[5014]: I0228 04:39:35.073980 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 28 04:39:35 crc kubenswrapper[5014]: I0228 04:39:35.090480 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Feb 28 04:39:35 crc kubenswrapper[5014]: I0228 04:39:35.170570 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 28 04:39:35 crc kubenswrapper[5014]: I0228 04:39:35.319510 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 28 04:39:35 crc kubenswrapper[5014]: I0228 04:39:35.342042 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 28 04:39:35 crc kubenswrapper[5014]: I0228 04:39:35.379798 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 28 04:39:35 crc kubenswrapper[5014]: I0228 04:39:35.452298 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 28 04:39:35 crc kubenswrapper[5014]: I0228 04:39:35.629182 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 28 04:39:35 crc kubenswrapper[5014]: I0228 04:39:35.630877 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 28 04:39:36 crc kubenswrapper[5014]: I0228 04:39:36.029237 5014 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 28 04:39:36 crc kubenswrapper[5014]: I0228 04:39:36.062265 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 28 04:39:36 crc kubenswrapper[5014]: I0228 04:39:36.499866 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 28 04:39:36 crc kubenswrapper[5014]: I0228 04:39:36.566668 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 28 04:39:36 crc kubenswrapper[5014]: I0228 04:39:36.709420 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 28 04:39:36 crc kubenswrapper[5014]: I0228 04:39:36.872058 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 28 04:39:37 crc kubenswrapper[5014]: I0228 04:39:37.274299 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 28 04:39:37 crc kubenswrapper[5014]: I0228 04:39:37.539877 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 28 04:39:37 crc kubenswrapper[5014]: I0228 04:39:37.857062 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 28 04:39:37 crc kubenswrapper[5014]: I0228 04:39:37.900492 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 28 04:39:38 crc kubenswrapper[5014]: I0228 04:39:38.123821 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 28 04:39:38 crc kubenswrapper[5014]: 
I0228 04:39:38.512743 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 28 04:39:38 crc kubenswrapper[5014]: I0228 04:39:38.515476 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 28 04:39:38 crc kubenswrapper[5014]: I0228 04:39:38.641365 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 28 04:39:38 crc kubenswrapper[5014]: I0228 04:39:38.751315 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 28 04:39:38 crc kubenswrapper[5014]: I0228 04:39:38.846456 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Feb 28 04:39:38 crc kubenswrapper[5014]: I0228 04:39:38.874783 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 28 04:39:39 crc kubenswrapper[5014]: I0228 04:39:39.062176 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 28 04:39:39 crc kubenswrapper[5014]: I0228 04:39:39.210531 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 28 04:39:39 crc kubenswrapper[5014]: I0228 04:39:39.391870 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 28 04:39:39 crc kubenswrapper[5014]: I0228 04:39:39.541138 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Feb 28 04:39:39 crc kubenswrapper[5014]: I0228 04:39:39.587527 5014 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 28 04:39:39 crc kubenswrapper[5014]: I0228 04:39:39.717289 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 28 04:39:39 crc kubenswrapper[5014]: I0228 04:39:39.831512 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 28 04:39:39 crc kubenswrapper[5014]: I0228 04:39:39.855084 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 28 04:39:39 crc kubenswrapper[5014]: I0228 04:39:39.871045 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 28 04:39:39 crc kubenswrapper[5014]: I0228 04:39:39.958047 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 28 04:39:40 crc kubenswrapper[5014]: I0228 04:39:40.364159 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 28 04:39:40 crc kubenswrapper[5014]: I0228 04:39:40.483425 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 28 04:39:40 crc kubenswrapper[5014]: I0228 04:39:40.660942 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Feb 28 04:39:40 crc kubenswrapper[5014]: I0228 04:39:40.697185 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 28 04:39:40 crc kubenswrapper[5014]: I0228 04:39:40.795797 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Feb 28 04:39:40 crc kubenswrapper[5014]: I0228 04:39:40.831745 5014 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 28 04:39:40 crc kubenswrapper[5014]: I0228 04:39:40.850498 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 28 04:39:40 crc kubenswrapper[5014]: I0228 04:39:40.912606 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 28 04:39:41 crc kubenswrapper[5014]: I0228 04:39:41.047614 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 28 04:39:41 crc kubenswrapper[5014]: I0228 04:39:41.237770 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 28 04:39:41 crc kubenswrapper[5014]: I0228 04:39:41.353167 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 28 04:39:41 crc kubenswrapper[5014]: I0228 04:39:41.471494 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 28 04:39:41 crc kubenswrapper[5014]: I0228 04:39:41.473616 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 28 04:39:41 crc kubenswrapper[5014]: I0228 04:39:41.634302 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 28 04:39:41 crc kubenswrapper[5014]: I0228 04:39:41.745349 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Feb 28 04:39:41 crc kubenswrapper[5014]: I0228 04:39:41.799862 5014 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 28 04:39:41 crc kubenswrapper[5014]: I0228 04:39:41.918233 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 28 04:39:41 crc kubenswrapper[5014]: I0228 04:39:41.956834 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 28 04:39:42 crc kubenswrapper[5014]: I0228 04:39:42.179236 5014 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 28 04:39:42 crc kubenswrapper[5014]: I0228 04:39:42.227176 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 28 04:39:42 crc kubenswrapper[5014]: I0228 04:39:42.287315 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 28 04:39:42 crc kubenswrapper[5014]: I0228 04:39:42.303787 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 28 04:39:42 crc kubenswrapper[5014]: I0228 04:39:42.381632 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 28 04:39:42 crc kubenswrapper[5014]: I0228 04:39:42.488933 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 28 04:39:42 crc kubenswrapper[5014]: I0228 04:39:42.493311 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 28 04:39:42 crc kubenswrapper[5014]: I0228 04:39:42.516109 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 28 04:39:42 crc kubenswrapper[5014]: I0228 
04:39:42.612080 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 28 04:39:42 crc kubenswrapper[5014]: I0228 04:39:42.688616 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 28 04:39:42 crc kubenswrapper[5014]: I0228 04:39:42.702291 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Feb 28 04:39:42 crc kubenswrapper[5014]: I0228 04:39:42.820846 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 28 04:39:42 crc kubenswrapper[5014]: I0228 04:39:42.827091 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Feb 28 04:39:42 crc kubenswrapper[5014]: I0228 04:39:42.901569 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 28 04:39:42 crc kubenswrapper[5014]: I0228 04:39:42.928543 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 28 04:39:43 crc kubenswrapper[5014]: I0228 04:39:43.010702 5014 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 28 04:39:43 crc kubenswrapper[5014]: I0228 04:39:43.028568 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 28 04:39:43 crc kubenswrapper[5014]: I0228 04:39:43.053857 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 28 04:39:43 crc kubenswrapper[5014]: I0228 04:39:43.117367 5014 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 28 04:39:43 crc kubenswrapper[5014]: I0228 04:39:43.280748 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 28 04:39:43 crc kubenswrapper[5014]: I0228 04:39:43.609401 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Feb 28 04:39:43 crc kubenswrapper[5014]: I0228 04:39:43.626923 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 28 04:39:43 crc kubenswrapper[5014]: I0228 04:39:43.664538 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 28 04:39:43 crc kubenswrapper[5014]: I0228 04:39:43.915860 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Feb 28 04:39:44 crc kubenswrapper[5014]: I0228 04:39:44.164060 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 28 04:39:44 crc kubenswrapper[5014]: I0228 04:39:44.286095 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 28 04:39:44 crc kubenswrapper[5014]: I0228 04:39:44.321942 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 28 04:39:44 crc kubenswrapper[5014]: I0228 04:39:44.470134 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 28 04:39:44 crc kubenswrapper[5014]: I0228 04:39:44.494995 5014 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 28 04:39:44 crc kubenswrapper[5014]: I0228 04:39:44.632888 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 28 04:39:44 crc kubenswrapper[5014]: I0228 04:39:44.680225 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 28 04:39:44 crc kubenswrapper[5014]: I0228 04:39:44.689263 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 28 04:39:45 crc kubenswrapper[5014]: I0228 04:39:45.017521 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 28 04:39:45 crc kubenswrapper[5014]: I0228 04:39:45.047553 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 28 04:39:45 crc kubenswrapper[5014]: I0228 04:39:45.174001 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 28 04:39:45 crc kubenswrapper[5014]: I0228 04:39:45.217237 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 28 04:39:45 crc kubenswrapper[5014]: I0228 04:39:45.308549 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 28 04:39:45 crc kubenswrapper[5014]: I0228 04:39:45.361561 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 28 04:39:45 crc kubenswrapper[5014]: I0228 04:39:45.404916 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Feb 28 04:39:45 crc kubenswrapper[5014]: I0228 04:39:45.430484 5014 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 28 04:39:45 crc kubenswrapper[5014]: I0228 04:39:45.471054 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 28 04:39:45 crc kubenswrapper[5014]: I0228 04:39:45.490289 5014 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 28 04:39:45 crc kubenswrapper[5014]: I0228 04:39:45.610128 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 28 04:39:45 crc kubenswrapper[5014]: I0228 04:39:45.690182 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 28 04:39:45 crc kubenswrapper[5014]: I0228 04:39:45.723256 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 28 04:39:46 crc kubenswrapper[5014]: I0228 04:39:46.023956 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 28 04:39:46 crc kubenswrapper[5014]: I0228 04:39:46.288275 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 28 04:39:46 crc kubenswrapper[5014]: I0228 04:39:46.290698 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 28 04:39:46 crc kubenswrapper[5014]: I0228 04:39:46.385920 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Feb 28 04:39:46 crc kubenswrapper[5014]: I0228 04:39:46.487037 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 28 
04:39:46 crc kubenswrapper[5014]: I0228 04:39:46.527942 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 28 04:39:46 crc kubenswrapper[5014]: I0228 04:39:46.576527 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 28 04:39:46 crc kubenswrapper[5014]: I0228 04:39:46.590122 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 28 04:39:46 crc kubenswrapper[5014]: I0228 04:39:46.683143 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 28 04:39:46 crc kubenswrapper[5014]: I0228 04:39:46.743240 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 28 04:39:46 crc kubenswrapper[5014]: I0228 04:39:46.743950 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 28 04:39:46 crc kubenswrapper[5014]: I0228 04:39:46.775057 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 28 04:39:46 crc kubenswrapper[5014]: I0228 04:39:46.864101 5014 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 28 04:39:46 crc kubenswrapper[5014]: I0228 04:39:46.878076 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Feb 28 04:39:46 crc kubenswrapper[5014]: I0228 04:39:46.879293 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 28 04:39:46 crc kubenswrapper[5014]: I0228 04:39:46.964217 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 28 04:39:47 crc kubenswrapper[5014]: I0228 
04:39:47.114297 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 28 04:39:47 crc kubenswrapper[5014]: I0228 04:39:47.143772 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 28 04:39:47 crc kubenswrapper[5014]: I0228 04:39:47.294998 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 28 04:39:47 crc kubenswrapper[5014]: I0228 04:39:47.388935 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 28 04:39:47 crc kubenswrapper[5014]: I0228 04:39:47.414061 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 28 04:39:47 crc kubenswrapper[5014]: I0228 04:39:47.421501 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 28 04:39:47 crc kubenswrapper[5014]: I0228 04:39:47.587261 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 28 04:39:47 crc kubenswrapper[5014]: I0228 04:39:47.697514 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 28 04:39:47 crc kubenswrapper[5014]: I0228 04:39:47.771636 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 28 04:39:47 crc kubenswrapper[5014]: I0228 04:39:47.889279 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 28 04:39:47 crc kubenswrapper[5014]: I0228 04:39:47.915327 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 28 04:39:47 crc kubenswrapper[5014]: 
I0228 04:39:47.974051 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.016370 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.030119 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.031645 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.049741 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.109795 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.131734 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.141133 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.145325 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.188240 5014 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.195388 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=43.195365539 
podStartE2EDuration="43.195365539s" podCreationTimestamp="2026-02-28 04:39:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:39:23.637168963 +0000 UTC m=+352.307294883" watchObservedRunningTime="2026-02-28 04:39:48.195365539 +0000 UTC m=+376.865491479" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.196388 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-controller-manager/controller-manager-55f4b7c499-pw2kt","openshift-authentication/oauth-openshift-558db77b4-fkqnd","openshift-route-controller-manager/route-controller-manager-855985cc94-t8kkh"] Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.196471 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-route-controller-manager/route-controller-manager-75ffccfd9c-bkl65","openshift-controller-manager/controller-manager-5b7d77b486-rln47"] Feb 28 04:39:48 crc kubenswrapper[5014]: E0228 04:39:48.196725 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56dc15d6-ebc6-459c-9847-c9f8c66dffe4" containerName="oauth-openshift" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.196754 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="56dc15d6-ebc6-459c-9847-c9f8c66dffe4" containerName="oauth-openshift" Feb 28 04:39:48 crc kubenswrapper[5014]: E0228 04:39:48.196775 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f5ec05c-1bc0-41ad-9135-05564f8e3192" containerName="installer" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.196787 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f5ec05c-1bc0-41ad-9135-05564f8e3192" containerName="installer" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.196980 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f5ec05c-1bc0-41ad-9135-05564f8e3192" 
containerName="installer" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.197003 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="56dc15d6-ebc6-459c-9847-c9f8c66dffe4" containerName="oauth-openshift" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.197096 5014 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2b29aed6-db00-4c95-831f-f3230a6edd2d" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.197133 5014 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="2b29aed6-db00-4c95-831f-f3230a6edd2d" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.197684 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75ffccfd9c-bkl65" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.198627 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5b7d77b486-rln47" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.200220 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.201505 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.201880 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.202533 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.202609 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 28 
04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.202994 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.203121 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.203283 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.203424 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.203943 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.204205 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.204523 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.210334 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.226081 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.234155 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=25.234125435 podStartE2EDuration="25.234125435s" podCreationTimestamp="2026-02-28 
04:39:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:39:48.227397165 +0000 UTC m=+376.897523095" watchObservedRunningTime="2026-02-28 04:39:48.234125435 +0000 UTC m=+376.904251345" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.242757 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4cc4bd6-5458-4e2e-b097-ad63d5960272-serving-cert\") pod \"route-controller-manager-75ffccfd9c-bkl65\" (UID: \"c4cc4bd6-5458-4e2e-b097-ad63d5960272\") " pod="openshift-route-controller-manager/route-controller-manager-75ffccfd9c-bkl65" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.242822 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4cc4bd6-5458-4e2e-b097-ad63d5960272-config\") pod \"route-controller-manager-75ffccfd9c-bkl65\" (UID: \"c4cc4bd6-5458-4e2e-b097-ad63d5960272\") " pod="openshift-route-controller-manager/route-controller-manager-75ffccfd9c-bkl65" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.242846 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm6s2\" (UniqueName: \"kubernetes.io/projected/c4cc4bd6-5458-4e2e-b097-ad63d5960272-kube-api-access-lm6s2\") pod \"route-controller-manager-75ffccfd9c-bkl65\" (UID: \"c4cc4bd6-5458-4e2e-b097-ad63d5960272\") " pod="openshift-route-controller-manager/route-controller-manager-75ffccfd9c-bkl65" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.242868 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c4cc4bd6-5458-4e2e-b097-ad63d5960272-client-ca\") pod \"route-controller-manager-75ffccfd9c-bkl65\" (UID: 
\"c4cc4bd6-5458-4e2e-b097-ad63d5960272\") " pod="openshift-route-controller-manager/route-controller-manager-75ffccfd9c-bkl65" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.243051 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/036f1568-3f0b-48f4-82de-4acc9ea6c668-config\") pod \"controller-manager-5b7d77b486-rln47\" (UID: \"036f1568-3f0b-48f4-82de-4acc9ea6c668\") " pod="openshift-controller-manager/controller-manager-5b7d77b486-rln47" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.243241 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/036f1568-3f0b-48f4-82de-4acc9ea6c668-client-ca\") pod \"controller-manager-5b7d77b486-rln47\" (UID: \"036f1568-3f0b-48f4-82de-4acc9ea6c668\") " pod="openshift-controller-manager/controller-manager-5b7d77b486-rln47" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.243366 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gsjz\" (UniqueName: \"kubernetes.io/projected/036f1568-3f0b-48f4-82de-4acc9ea6c668-kube-api-access-6gsjz\") pod \"controller-manager-5b7d77b486-rln47\" (UID: \"036f1568-3f0b-48f4-82de-4acc9ea6c668\") " pod="openshift-controller-manager/controller-manager-5b7d77b486-rln47" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.243565 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/036f1568-3f0b-48f4-82de-4acc9ea6c668-serving-cert\") pod \"controller-manager-5b7d77b486-rln47\" (UID: \"036f1568-3f0b-48f4-82de-4acc9ea6c668\") " pod="openshift-controller-manager/controller-manager-5b7d77b486-rln47" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.243607 5014 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/036f1568-3f0b-48f4-82de-4acc9ea6c668-proxy-ca-bundles\") pod \"controller-manager-5b7d77b486-rln47\" (UID: \"036f1568-3f0b-48f4-82de-4acc9ea6c668\") " pod="openshift-controller-manager/controller-manager-5b7d77b486-rln47" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.345380 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/036f1568-3f0b-48f4-82de-4acc9ea6c668-serving-cert\") pod \"controller-manager-5b7d77b486-rln47\" (UID: \"036f1568-3f0b-48f4-82de-4acc9ea6c668\") " pod="openshift-controller-manager/controller-manager-5b7d77b486-rln47" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.345495 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/036f1568-3f0b-48f4-82de-4acc9ea6c668-proxy-ca-bundles\") pod \"controller-manager-5b7d77b486-rln47\" (UID: \"036f1568-3f0b-48f4-82de-4acc9ea6c668\") " pod="openshift-controller-manager/controller-manager-5b7d77b486-rln47" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.345541 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4cc4bd6-5458-4e2e-b097-ad63d5960272-serving-cert\") pod \"route-controller-manager-75ffccfd9c-bkl65\" (UID: \"c4cc4bd6-5458-4e2e-b097-ad63d5960272\") " pod="openshift-route-controller-manager/route-controller-manager-75ffccfd9c-bkl65" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.345565 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4cc4bd6-5458-4e2e-b097-ad63d5960272-config\") pod \"route-controller-manager-75ffccfd9c-bkl65\" (UID: \"c4cc4bd6-5458-4e2e-b097-ad63d5960272\") " 
pod="openshift-route-controller-manager/route-controller-manager-75ffccfd9c-bkl65" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.345585 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm6s2\" (UniqueName: \"kubernetes.io/projected/c4cc4bd6-5458-4e2e-b097-ad63d5960272-kube-api-access-lm6s2\") pod \"route-controller-manager-75ffccfd9c-bkl65\" (UID: \"c4cc4bd6-5458-4e2e-b097-ad63d5960272\") " pod="openshift-route-controller-manager/route-controller-manager-75ffccfd9c-bkl65" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.345610 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c4cc4bd6-5458-4e2e-b097-ad63d5960272-client-ca\") pod \"route-controller-manager-75ffccfd9c-bkl65\" (UID: \"c4cc4bd6-5458-4e2e-b097-ad63d5960272\") " pod="openshift-route-controller-manager/route-controller-manager-75ffccfd9c-bkl65" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.345641 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/036f1568-3f0b-48f4-82de-4acc9ea6c668-config\") pod \"controller-manager-5b7d77b486-rln47\" (UID: \"036f1568-3f0b-48f4-82de-4acc9ea6c668\") " pod="openshift-controller-manager/controller-manager-5b7d77b486-rln47" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.345690 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/036f1568-3f0b-48f4-82de-4acc9ea6c668-client-ca\") pod \"controller-manager-5b7d77b486-rln47\" (UID: \"036f1568-3f0b-48f4-82de-4acc9ea6c668\") " pod="openshift-controller-manager/controller-manager-5b7d77b486-rln47" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.345725 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gsjz\" (UniqueName: 
\"kubernetes.io/projected/036f1568-3f0b-48f4-82de-4acc9ea6c668-kube-api-access-6gsjz\") pod \"controller-manager-5b7d77b486-rln47\" (UID: \"036f1568-3f0b-48f4-82de-4acc9ea6c668\") " pod="openshift-controller-manager/controller-manager-5b7d77b486-rln47" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.346468 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.347435 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/036f1568-3f0b-48f4-82de-4acc9ea6c668-proxy-ca-bundles\") pod \"controller-manager-5b7d77b486-rln47\" (UID: \"036f1568-3f0b-48f4-82de-4acc9ea6c668\") " pod="openshift-controller-manager/controller-manager-5b7d77b486-rln47" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.347489 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c4cc4bd6-5458-4e2e-b097-ad63d5960272-client-ca\") pod \"route-controller-manager-75ffccfd9c-bkl65\" (UID: \"c4cc4bd6-5458-4e2e-b097-ad63d5960272\") " pod="openshift-route-controller-manager/route-controller-manager-75ffccfd9c-bkl65" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.349070 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4cc4bd6-5458-4e2e-b097-ad63d5960272-config\") pod \"route-controller-manager-75ffccfd9c-bkl65\" (UID: \"c4cc4bd6-5458-4e2e-b097-ad63d5960272\") " pod="openshift-route-controller-manager/route-controller-manager-75ffccfd9c-bkl65" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.349173 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/036f1568-3f0b-48f4-82de-4acc9ea6c668-config\") pod \"controller-manager-5b7d77b486-rln47\" (UID: 
\"036f1568-3f0b-48f4-82de-4acc9ea6c668\") " pod="openshift-controller-manager/controller-manager-5b7d77b486-rln47" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.349323 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/036f1568-3f0b-48f4-82de-4acc9ea6c668-client-ca\") pod \"controller-manager-5b7d77b486-rln47\" (UID: \"036f1568-3f0b-48f4-82de-4acc9ea6c668\") " pod="openshift-controller-manager/controller-manager-5b7d77b486-rln47" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.352388 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4cc4bd6-5458-4e2e-b097-ad63d5960272-serving-cert\") pod \"route-controller-manager-75ffccfd9c-bkl65\" (UID: \"c4cc4bd6-5458-4e2e-b097-ad63d5960272\") " pod="openshift-route-controller-manager/route-controller-manager-75ffccfd9c-bkl65" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.361507 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/036f1568-3f0b-48f4-82de-4acc9ea6c668-serving-cert\") pod \"controller-manager-5b7d77b486-rln47\" (UID: \"036f1568-3f0b-48f4-82de-4acc9ea6c668\") " pod="openshift-controller-manager/controller-manager-5b7d77b486-rln47" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.378051 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm6s2\" (UniqueName: \"kubernetes.io/projected/c4cc4bd6-5458-4e2e-b097-ad63d5960272-kube-api-access-lm6s2\") pod \"route-controller-manager-75ffccfd9c-bkl65\" (UID: \"c4cc4bd6-5458-4e2e-b097-ad63d5960272\") " pod="openshift-route-controller-manager/route-controller-manager-75ffccfd9c-bkl65" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.379950 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gsjz\" (UniqueName: 
\"kubernetes.io/projected/036f1568-3f0b-48f4-82de-4acc9ea6c668-kube-api-access-6gsjz\") pod \"controller-manager-5b7d77b486-rln47\" (UID: \"036f1568-3f0b-48f4-82de-4acc9ea6c668\") " pod="openshift-controller-manager/controller-manager-5b7d77b486-rln47" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.386469 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.401248 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.511830 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.550002 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75ffccfd9c-bkl65" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.566642 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5b7d77b486-rln47" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.615896 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.659486 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.671018 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.773904 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75ffccfd9c-bkl65"] Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.823343 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5b7d77b486-rln47"] Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.825279 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 28 04:39:48 crc kubenswrapper[5014]: I0228 04:39:48.962362 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.032031 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.073324 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.261606 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-599f5c589d-z2484"] Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.262459 5014 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.268474 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.268539 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.269076 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.270943 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.271216 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.271878 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.272220 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.272452 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.272700 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.273020 5014 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.273296 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.273299 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.275740 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-599f5c589d-z2484"] Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.284640 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.291752 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.292381 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.321310 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.361599 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.361668 5014 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-v4-0-config-system-router-certs\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.361705 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc8fn\" (UniqueName: \"kubernetes.io/projected/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-kube-api-access-cc8fn\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.361745 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-v4-0-config-user-template-error\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.361771 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-v4-0-config-user-template-login\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.361791 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.361843 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-audit-policies\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.361867 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.361885 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-audit-dir\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.361912 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " 
pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.361935 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-v4-0-config-system-service-ca\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.361958 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-v4-0-config-system-session\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.361979 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.361999 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.397491 5014 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.463126 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-audit-dir\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.463220 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.463258 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-v4-0-config-system-service-ca\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.463286 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-audit-dir\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.463296 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-v4-0-config-system-session\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.463636 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.463691 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.463800 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.464029 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-v4-0-config-system-router-certs\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " 
pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.464154 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc8fn\" (UniqueName: \"kubernetes.io/projected/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-kube-api-access-cc8fn\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.464215 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-v4-0-config-user-template-error\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.464251 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-v4-0-config-user-template-login\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.464287 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.464350 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" 
(UniqueName: \"kubernetes.io/configmap/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-audit-policies\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.464395 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.465717 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-audit-policies\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.466259 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.466295 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " 
pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.467318 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-v4-0-config-system-service-ca\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.471130 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-v4-0-config-system-session\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.473078 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.476351 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-v4-0-config-user-template-error\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.476712 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.476745 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-v4-0-config-system-router-certs\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.477289 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.477650 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.486617 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-v4-0-config-user-template-login\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " 
pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.491169 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc8fn\" (UniqueName: \"kubernetes.io/projected/b4eb68bb-998c-497d-a05c-2d39ed0dd81e-kube-api-access-cc8fn\") pod \"oauth-openshift-599f5c589d-z2484\" (UID: \"b4eb68bb-998c-497d-a05c-2d39ed0dd81e\") " pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.602426 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.603458 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.609691 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.772259 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b7d77b486-rln47" event={"ID":"036f1568-3f0b-48f4-82de-4acc9ea6c668","Type":"ContainerStarted","Data":"f0d24af79cc88632129b69da6aee977374e922e15e145dbd8928080474e6b4cf"} Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.772652 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5b7d77b486-rln47" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.772671 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5b7d77b486-rln47" event={"ID":"036f1568-3f0b-48f4-82de-4acc9ea6c668","Type":"ContainerStarted","Data":"e06940b0e84b40463f886cca1dfc80a6204fd2a7c9f6516874d706100a0ae7cb"} Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 
04:39:49.777514 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75ffccfd9c-bkl65" event={"ID":"c4cc4bd6-5458-4e2e-b097-ad63d5960272","Type":"ContainerStarted","Data":"f8bb93f6cb229ba339ed8cba711a2cb611872c7ad6372e376bb5059b75eaa602"} Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.777555 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75ffccfd9c-bkl65" event={"ID":"c4cc4bd6-5458-4e2e-b097-ad63d5960272","Type":"ContainerStarted","Data":"1012a57ac217aaeb872c5736ce864c05fd5ac7bd0e9a52358f4714ff8112bef1"} Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.778138 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-75ffccfd9c-bkl65" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.778619 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5b7d77b486-rln47" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.781744 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-75ffccfd9c-bkl65" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.794861 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5b7d77b486-rln47" podStartSLOduration=44.794839026 podStartE2EDuration="44.794839026s" podCreationTimestamp="2026-02-28 04:39:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:39:49.794337912 +0000 UTC m=+378.464463822" watchObservedRunningTime="2026-02-28 04:39:49.794839026 +0000 UTC m=+378.464964936" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.833107 5014 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.835609 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.853496 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-75ffccfd9c-bkl65" podStartSLOduration=44.853473104 podStartE2EDuration="44.853473104s" podCreationTimestamp="2026-02-28 04:39:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:39:49.85334212 +0000 UTC m=+378.523468030" watchObservedRunningTime="2026-02-28 04:39:49.853473104 +0000 UTC m=+378.523599014" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.877661 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.883540 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-599f5c589d-z2484"] Feb 28 04:39:49 crc kubenswrapper[5014]: I0228 04:39:49.925486 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 28 04:39:50 crc kubenswrapper[5014]: I0228 04:39:50.069120 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 28 04:39:50 crc kubenswrapper[5014]: I0228 04:39:50.188189 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56dc15d6-ebc6-459c-9847-c9f8c66dffe4" path="/var/lib/kubelet/pods/56dc15d6-ebc6-459c-9847-c9f8c66dffe4/volumes" Feb 28 04:39:50 crc kubenswrapper[5014]: I0228 04:39:50.189161 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6" path="/var/lib/kubelet/pods/d3113356-a2d6-4e49-a19c-6d1e7c0e0fd6/volumes" Feb 28 04:39:50 crc kubenswrapper[5014]: I0228 04:39:50.189801 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f314ad52-83d6-47fd-931d-892a30cca689" path="/var/lib/kubelet/pods/f314ad52-83d6-47fd-931d-892a30cca689/volumes" Feb 28 04:39:50 crc kubenswrapper[5014]: I0228 04:39:50.317542 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 28 04:39:50 crc kubenswrapper[5014]: I0228 04:39:50.508841 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 28 04:39:50 crc kubenswrapper[5014]: I0228 04:39:50.589311 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 28 04:39:50 crc kubenswrapper[5014]: I0228 04:39:50.787345 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" event={"ID":"b4eb68bb-998c-497d-a05c-2d39ed0dd81e","Type":"ContainerStarted","Data":"c02853d360b7ea0d72bf8f943a11c5e7bf737e3d5072e294d907006516dfa1bf"} Feb 28 04:39:50 crc kubenswrapper[5014]: I0228 04:39:50.787400 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" event={"ID":"b4eb68bb-998c-497d-a05c-2d39ed0dd81e","Type":"ContainerStarted","Data":"1f5993ae72335b0eb7c5495fcb4544fc913210e44e1c166937d3bf8b6a81fb1f"} Feb 28 04:39:50 crc kubenswrapper[5014]: I0228 04:39:50.787682 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:50 crc kubenswrapper[5014]: I0228 04:39:50.794409 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" Feb 28 04:39:50 crc 
kubenswrapper[5014]: I0228 04:39:50.817660 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-599f5c589d-z2484" podStartSLOduration=53.81763604 podStartE2EDuration="53.81763604s" podCreationTimestamp="2026-02-28 04:38:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:39:50.811756114 +0000 UTC m=+379.481882044" watchObservedRunningTime="2026-02-28 04:39:50.81763604 +0000 UTC m=+379.487761960" Feb 28 04:39:50 crc kubenswrapper[5014]: I0228 04:39:50.832472 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 28 04:39:51 crc kubenswrapper[5014]: I0228 04:39:51.088542 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Feb 28 04:39:51 crc kubenswrapper[5014]: I0228 04:39:51.335021 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 28 04:39:51 crc kubenswrapper[5014]: I0228 04:39:51.472649 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 28 04:39:51 crc kubenswrapper[5014]: I0228 04:39:51.646508 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 28 04:39:51 crc kubenswrapper[5014]: I0228 04:39:51.776985 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 28 04:39:51 crc kubenswrapper[5014]: I0228 04:39:51.869629 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 28 04:39:51 crc kubenswrapper[5014]: I0228 04:39:51.987212 5014 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 28 04:39:52 crc kubenswrapper[5014]: I0228 04:39:52.002917 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 28 04:39:52 crc kubenswrapper[5014]: I0228 04:39:52.203698 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 28 04:39:52 crc kubenswrapper[5014]: I0228 04:39:52.223284 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 28 04:39:52 crc kubenswrapper[5014]: I0228 04:39:52.618479 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 28 04:39:52 crc kubenswrapper[5014]: I0228 04:39:52.911046 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 28 04:39:53 crc kubenswrapper[5014]: I0228 04:39:53.145398 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 28 04:39:53 crc kubenswrapper[5014]: I0228 04:39:53.795603 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 28 04:39:53 crc kubenswrapper[5014]: I0228 04:39:53.894677 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 28 04:39:57 crc kubenswrapper[5014]: I0228 04:39:57.471685 5014 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 28 04:39:57 crc kubenswrapper[5014]: I0228 04:39:57.472599 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://6ecba43321d1feb84452675229c1f6c9df79eec65d6ebc35039770586fa4419e" gracePeriod=5 Feb 28 04:40:00 crc kubenswrapper[5014]: I0228 04:40:00.196959 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537560-g4xd4"] Feb 28 04:40:00 crc kubenswrapper[5014]: E0228 04:40:00.197940 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 28 04:40:00 crc kubenswrapper[5014]: I0228 04:40:00.197970 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 28 04:40:00 crc kubenswrapper[5014]: I0228 04:40:00.198220 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 28 04:40:00 crc kubenswrapper[5014]: I0228 04:40:00.198989 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537560-g4xd4" Feb 28 04:40:00 crc kubenswrapper[5014]: I0228 04:40:00.204201 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537560-g4xd4"] Feb 28 04:40:00 crc kubenswrapper[5014]: I0228 04:40:00.206147 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 04:40:00 crc kubenswrapper[5014]: I0228 04:40:00.206167 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 04:40:00 crc kubenswrapper[5014]: I0228 04:40:00.206363 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-r6pdk" Feb 28 04:40:00 crc kubenswrapper[5014]: I0228 04:40:00.232894 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8fzq\" (UniqueName: \"kubernetes.io/projected/56bd259d-1322-4f57-aa09-1384b22a54a9-kube-api-access-f8fzq\") pod \"auto-csr-approver-29537560-g4xd4\" (UID: \"56bd259d-1322-4f57-aa09-1384b22a54a9\") " pod="openshift-infra/auto-csr-approver-29537560-g4xd4" Feb 28 04:40:00 crc kubenswrapper[5014]: I0228 04:40:00.334729 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8fzq\" (UniqueName: \"kubernetes.io/projected/56bd259d-1322-4f57-aa09-1384b22a54a9-kube-api-access-f8fzq\") pod \"auto-csr-approver-29537560-g4xd4\" (UID: \"56bd259d-1322-4f57-aa09-1384b22a54a9\") " pod="openshift-infra/auto-csr-approver-29537560-g4xd4" Feb 28 04:40:00 crc kubenswrapper[5014]: I0228 04:40:00.374762 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8fzq\" (UniqueName: \"kubernetes.io/projected/56bd259d-1322-4f57-aa09-1384b22a54a9-kube-api-access-f8fzq\") pod \"auto-csr-approver-29537560-g4xd4\" (UID: \"56bd259d-1322-4f57-aa09-1384b22a54a9\") " 
pod="openshift-infra/auto-csr-approver-29537560-g4xd4" Feb 28 04:40:00 crc kubenswrapper[5014]: I0228 04:40:00.529374 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537560-g4xd4" Feb 28 04:40:00 crc kubenswrapper[5014]: I0228 04:40:00.964700 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537560-g4xd4"] Feb 28 04:40:00 crc kubenswrapper[5014]: W0228 04:40:00.970983 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod56bd259d_1322_4f57_aa09_1384b22a54a9.slice/crio-6539ba1de4e1b5caa56563263cb3e718add87f60e0dedbedd79e7417c5fdf8c0 WatchSource:0}: Error finding container 6539ba1de4e1b5caa56563263cb3e718add87f60e0dedbedd79e7417c5fdf8c0: Status 404 returned error can't find the container with id 6539ba1de4e1b5caa56563263cb3e718add87f60e0dedbedd79e7417c5fdf8c0 Feb 28 04:40:01 crc kubenswrapper[5014]: I0228 04:40:01.870706 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537560-g4xd4" event={"ID":"56bd259d-1322-4f57-aa09-1384b22a54a9","Type":"ContainerStarted","Data":"6539ba1de4e1b5caa56563263cb3e718add87f60e0dedbedd79e7417c5fdf8c0"} Feb 28 04:40:02 crc kubenswrapper[5014]: I0228 04:40:02.884298 5014 generic.go:334] "Generic (PLEG): container finished" podID="56bd259d-1322-4f57-aa09-1384b22a54a9" containerID="f35d3e277d6f66460b6e9019dae8498ea93fff9c90babcf14830e0335f65c0b6" exitCode=0 Feb 28 04:40:02 crc kubenswrapper[5014]: I0228 04:40:02.884431 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537560-g4xd4" event={"ID":"56bd259d-1322-4f57-aa09-1384b22a54a9","Type":"ContainerDied","Data":"f35d3e277d6f66460b6e9019dae8498ea93fff9c90babcf14830e0335f65c0b6"} Feb 28 04:40:02 crc kubenswrapper[5014]: I0228 04:40:02.892229 5014 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 28 04:40:02 crc kubenswrapper[5014]: I0228 04:40:02.892299 5014 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="6ecba43321d1feb84452675229c1f6c9df79eec65d6ebc35039770586fa4419e" exitCode=137 Feb 28 04:40:03 crc kubenswrapper[5014]: I0228 04:40:03.079736 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 28 04:40:03 crc kubenswrapper[5014]: I0228 04:40:03.079858 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 04:40:03 crc kubenswrapper[5014]: I0228 04:40:03.174521 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 28 04:40:03 crc kubenswrapper[5014]: I0228 04:40:03.174614 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 28 04:40:03 crc kubenswrapper[5014]: I0228 04:40:03.174768 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 28 04:40:03 crc kubenswrapper[5014]: I0228 04:40:03.174818 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:40:03 crc kubenswrapper[5014]: I0228 04:40:03.174886 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:40:03 crc kubenswrapper[5014]: I0228 04:40:03.174886 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:40:03 crc kubenswrapper[5014]: I0228 04:40:03.174844 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 28 04:40:03 crc kubenswrapper[5014]: I0228 04:40:03.175029 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 28 04:40:03 crc kubenswrapper[5014]: I0228 04:40:03.175143 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:40:03 crc kubenswrapper[5014]: I0228 04:40:03.175567 5014 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 28 04:40:03 crc kubenswrapper[5014]: I0228 04:40:03.175586 5014 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 28 04:40:03 crc kubenswrapper[5014]: I0228 04:40:03.175601 5014 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 28 04:40:03 crc kubenswrapper[5014]: I0228 04:40:03.175614 5014 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 28 04:40:03 crc kubenswrapper[5014]: I0228 04:40:03.188725 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:40:03 crc kubenswrapper[5014]: I0228 04:40:03.277344 5014 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 28 04:40:03 crc kubenswrapper[5014]: I0228 04:40:03.901099 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 28 04:40:03 crc kubenswrapper[5014]: I0228 04:40:03.901630 5014 scope.go:117] "RemoveContainer" containerID="6ecba43321d1feb84452675229c1f6c9df79eec65d6ebc35039770586fa4419e" Feb 28 04:40:03 crc kubenswrapper[5014]: I0228 04:40:03.901690 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 28 04:40:04 crc kubenswrapper[5014]: I0228 04:40:04.182166 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 28 04:40:04 crc kubenswrapper[5014]: I0228 04:40:04.182498 5014 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 28 04:40:04 crc kubenswrapper[5014]: I0228 04:40:04.198831 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 28 04:40:04 crc kubenswrapper[5014]: I0228 04:40:04.198876 5014 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="c5b73218-f648-4b41-bc89-6eadd4ec8398" Feb 28 04:40:04 crc kubenswrapper[5014]: I0228 04:40:04.205201 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 28 04:40:04 crc kubenswrapper[5014]: I0228 04:40:04.205240 5014 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="c5b73218-f648-4b41-bc89-6eadd4ec8398" Feb 28 04:40:04 crc kubenswrapper[5014]: I0228 04:40:04.254350 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537560-g4xd4" Feb 28 04:40:04 crc kubenswrapper[5014]: I0228 04:40:04.293565 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8fzq\" (UniqueName: \"kubernetes.io/projected/56bd259d-1322-4f57-aa09-1384b22a54a9-kube-api-access-f8fzq\") pod \"56bd259d-1322-4f57-aa09-1384b22a54a9\" (UID: \"56bd259d-1322-4f57-aa09-1384b22a54a9\") " Feb 28 04:40:04 crc kubenswrapper[5014]: I0228 04:40:04.304049 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56bd259d-1322-4f57-aa09-1384b22a54a9-kube-api-access-f8fzq" (OuterVolumeSpecName: "kube-api-access-f8fzq") pod "56bd259d-1322-4f57-aa09-1384b22a54a9" (UID: "56bd259d-1322-4f57-aa09-1384b22a54a9"). InnerVolumeSpecName "kube-api-access-f8fzq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:40:04 crc kubenswrapper[5014]: I0228 04:40:04.395206 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8fzq\" (UniqueName: \"kubernetes.io/projected/56bd259d-1322-4f57-aa09-1384b22a54a9-kube-api-access-f8fzq\") on node \"crc\" DevicePath \"\"" Feb 28 04:40:04 crc kubenswrapper[5014]: I0228 04:40:04.909346 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537560-g4xd4" event={"ID":"56bd259d-1322-4f57-aa09-1384b22a54a9","Type":"ContainerDied","Data":"6539ba1de4e1b5caa56563263cb3e718add87f60e0dedbedd79e7417c5fdf8c0"} Feb 28 04:40:04 crc kubenswrapper[5014]: I0228 04:40:04.909400 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6539ba1de4e1b5caa56563263cb3e718add87f60e0dedbedd79e7417c5fdf8c0" Feb 28 04:40:04 crc kubenswrapper[5014]: I0228 04:40:04.909433 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537560-g4xd4" Feb 28 04:40:23 crc kubenswrapper[5014]: I0228 04:40:23.852155 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kx627"] Feb 28 04:40:23 crc kubenswrapper[5014]: I0228 04:40:23.853161 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kx627" podUID="bd99ec6a-5237-42f9-81ad-bd813d262c6d" containerName="registry-server" containerID="cri-o://aa0668cb3d249afac3762bcf32c8f92bde7a8a061c5114c229821ab7ce4beb62" gracePeriod=2 Feb 28 04:40:24 crc kubenswrapper[5014]: I0228 04:40:24.037364 5014 generic.go:334] "Generic (PLEG): container finished" podID="bd99ec6a-5237-42f9-81ad-bd813d262c6d" containerID="aa0668cb3d249afac3762bcf32c8f92bde7a8a061c5114c229821ab7ce4beb62" exitCode=0 Feb 28 04:40:24 crc kubenswrapper[5014]: I0228 04:40:24.037623 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-kx627" event={"ID":"bd99ec6a-5237-42f9-81ad-bd813d262c6d","Type":"ContainerDied","Data":"aa0668cb3d249afac3762bcf32c8f92bde7a8a061c5114c229821ab7ce4beb62"} Feb 28 04:40:24 crc kubenswrapper[5014]: I0228 04:40:24.048508 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r5h8g"] Feb 28 04:40:24 crc kubenswrapper[5014]: I0228 04:40:24.049074 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r5h8g" podUID="7bdb5d29-5a4c-4358-a276-58efd08a8655" containerName="registry-server" containerID="cri-o://c0cc124af7da90090bb62569c5a149cbecfaceeadf8e6ff8397672a358fc9852" gracePeriod=2 Feb 28 04:40:24 crc kubenswrapper[5014]: I0228 04:40:24.397739 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kx627" Feb 28 04:40:24 crc kubenswrapper[5014]: I0228 04:40:24.473212 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd99ec6a-5237-42f9-81ad-bd813d262c6d-catalog-content\") pod \"bd99ec6a-5237-42f9-81ad-bd813d262c6d\" (UID: \"bd99ec6a-5237-42f9-81ad-bd813d262c6d\") " Feb 28 04:40:24 crc kubenswrapper[5014]: I0228 04:40:24.473362 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd99ec6a-5237-42f9-81ad-bd813d262c6d-utilities\") pod \"bd99ec6a-5237-42f9-81ad-bd813d262c6d\" (UID: \"bd99ec6a-5237-42f9-81ad-bd813d262c6d\") " Feb 28 04:40:24 crc kubenswrapper[5014]: I0228 04:40:24.473396 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvjgv\" (UniqueName: \"kubernetes.io/projected/bd99ec6a-5237-42f9-81ad-bd813d262c6d-kube-api-access-nvjgv\") pod \"bd99ec6a-5237-42f9-81ad-bd813d262c6d\" (UID: 
\"bd99ec6a-5237-42f9-81ad-bd813d262c6d\") " Feb 28 04:40:24 crc kubenswrapper[5014]: I0228 04:40:24.474222 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd99ec6a-5237-42f9-81ad-bd813d262c6d-utilities" (OuterVolumeSpecName: "utilities") pod "bd99ec6a-5237-42f9-81ad-bd813d262c6d" (UID: "bd99ec6a-5237-42f9-81ad-bd813d262c6d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:40:24 crc kubenswrapper[5014]: I0228 04:40:24.474665 5014 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd99ec6a-5237-42f9-81ad-bd813d262c6d-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 04:40:24 crc kubenswrapper[5014]: I0228 04:40:24.483051 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd99ec6a-5237-42f9-81ad-bd813d262c6d-kube-api-access-nvjgv" (OuterVolumeSpecName: "kube-api-access-nvjgv") pod "bd99ec6a-5237-42f9-81ad-bd813d262c6d" (UID: "bd99ec6a-5237-42f9-81ad-bd813d262c6d"). InnerVolumeSpecName "kube-api-access-nvjgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:40:24 crc kubenswrapper[5014]: I0228 04:40:24.497142 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r5h8g" Feb 28 04:40:24 crc kubenswrapper[5014]: I0228 04:40:24.540105 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd99ec6a-5237-42f9-81ad-bd813d262c6d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd99ec6a-5237-42f9-81ad-bd813d262c6d" (UID: "bd99ec6a-5237-42f9-81ad-bd813d262c6d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:40:24 crc kubenswrapper[5014]: I0228 04:40:24.575663 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bdb5d29-5a4c-4358-a276-58efd08a8655-utilities\") pod \"7bdb5d29-5a4c-4358-a276-58efd08a8655\" (UID: \"7bdb5d29-5a4c-4358-a276-58efd08a8655\") " Feb 28 04:40:24 crc kubenswrapper[5014]: I0228 04:40:24.575757 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bdb5d29-5a4c-4358-a276-58efd08a8655-catalog-content\") pod \"7bdb5d29-5a4c-4358-a276-58efd08a8655\" (UID: \"7bdb5d29-5a4c-4358-a276-58efd08a8655\") " Feb 28 04:40:24 crc kubenswrapper[5014]: I0228 04:40:24.575783 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqtc5\" (UniqueName: \"kubernetes.io/projected/7bdb5d29-5a4c-4358-a276-58efd08a8655-kube-api-access-rqtc5\") pod \"7bdb5d29-5a4c-4358-a276-58efd08a8655\" (UID: \"7bdb5d29-5a4c-4358-a276-58efd08a8655\") " Feb 28 04:40:24 crc kubenswrapper[5014]: I0228 04:40:24.576068 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvjgv\" (UniqueName: \"kubernetes.io/projected/bd99ec6a-5237-42f9-81ad-bd813d262c6d-kube-api-access-nvjgv\") on node \"crc\" DevicePath \"\"" Feb 28 04:40:24 crc kubenswrapper[5014]: I0228 04:40:24.576085 5014 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd99ec6a-5237-42f9-81ad-bd813d262c6d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 04:40:24 crc kubenswrapper[5014]: I0228 04:40:24.577178 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bdb5d29-5a4c-4358-a276-58efd08a8655-utilities" (OuterVolumeSpecName: "utilities") pod "7bdb5d29-5a4c-4358-a276-58efd08a8655" (UID: 
"7bdb5d29-5a4c-4358-a276-58efd08a8655"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:40:24 crc kubenswrapper[5014]: I0228 04:40:24.579261 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bdb5d29-5a4c-4358-a276-58efd08a8655-kube-api-access-rqtc5" (OuterVolumeSpecName: "kube-api-access-rqtc5") pod "7bdb5d29-5a4c-4358-a276-58efd08a8655" (UID: "7bdb5d29-5a4c-4358-a276-58efd08a8655"). InnerVolumeSpecName "kube-api-access-rqtc5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:40:24 crc kubenswrapper[5014]: I0228 04:40:24.641876 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bdb5d29-5a4c-4358-a276-58efd08a8655-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7bdb5d29-5a4c-4358-a276-58efd08a8655" (UID: "7bdb5d29-5a4c-4358-a276-58efd08a8655"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:40:24 crc kubenswrapper[5014]: I0228 04:40:24.678869 5014 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bdb5d29-5a4c-4358-a276-58efd08a8655-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 04:40:24 crc kubenswrapper[5014]: I0228 04:40:24.679410 5014 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bdb5d29-5a4c-4358-a276-58efd08a8655-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 04:40:24 crc kubenswrapper[5014]: I0228 04:40:24.679427 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqtc5\" (UniqueName: \"kubernetes.io/projected/7bdb5d29-5a4c-4358-a276-58efd08a8655-kube-api-access-rqtc5\") on node \"crc\" DevicePath \"\"" Feb 28 04:40:25 crc kubenswrapper[5014]: I0228 04:40:25.046594 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kx627" Feb 28 04:40:25 crc kubenswrapper[5014]: I0228 04:40:25.046569 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kx627" event={"ID":"bd99ec6a-5237-42f9-81ad-bd813d262c6d","Type":"ContainerDied","Data":"8a949a74e97dc77d95dfd47164d840b7c3750f8d095374a4a90fbd8d574e3a23"} Feb 28 04:40:25 crc kubenswrapper[5014]: I0228 04:40:25.048011 5014 scope.go:117] "RemoveContainer" containerID="aa0668cb3d249afac3762bcf32c8f92bde7a8a061c5114c229821ab7ce4beb62" Feb 28 04:40:25 crc kubenswrapper[5014]: I0228 04:40:25.050692 5014 generic.go:334] "Generic (PLEG): container finished" podID="7bdb5d29-5a4c-4358-a276-58efd08a8655" containerID="c0cc124af7da90090bb62569c5a149cbecfaceeadf8e6ff8397672a358fc9852" exitCode=0 Feb 28 04:40:25 crc kubenswrapper[5014]: I0228 04:40:25.050735 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5h8g" event={"ID":"7bdb5d29-5a4c-4358-a276-58efd08a8655","Type":"ContainerDied","Data":"c0cc124af7da90090bb62569c5a149cbecfaceeadf8e6ff8397672a358fc9852"} Feb 28 04:40:25 crc kubenswrapper[5014]: I0228 04:40:25.050767 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r5h8g" event={"ID":"7bdb5d29-5a4c-4358-a276-58efd08a8655","Type":"ContainerDied","Data":"9f2ff58fccc402d215407db9b6b4cc257fab3bd62fe055aa17cd8eafe6508e6d"} Feb 28 04:40:25 crc kubenswrapper[5014]: I0228 04:40:25.050966 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r5h8g" Feb 28 04:40:25 crc kubenswrapper[5014]: I0228 04:40:25.075184 5014 scope.go:117] "RemoveContainer" containerID="c591e2f67ebe687cbde7f79f26f7a1406e65e81028d8cea464f4c9e79fc553e7" Feb 28 04:40:25 crc kubenswrapper[5014]: I0228 04:40:25.101175 5014 scope.go:117] "RemoveContainer" containerID="c84ecce3f7b84faaf2e273cce8aca402b65f0e5b1c5afc3b968a367f67a39184" Feb 28 04:40:25 crc kubenswrapper[5014]: I0228 04:40:25.134465 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r5h8g"] Feb 28 04:40:25 crc kubenswrapper[5014]: I0228 04:40:25.141158 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r5h8g"] Feb 28 04:40:25 crc kubenswrapper[5014]: I0228 04:40:25.144352 5014 scope.go:117] "RemoveContainer" containerID="c0cc124af7da90090bb62569c5a149cbecfaceeadf8e6ff8397672a358fc9852" Feb 28 04:40:25 crc kubenswrapper[5014]: I0228 04:40:25.145960 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kx627"] Feb 28 04:40:25 crc kubenswrapper[5014]: I0228 04:40:25.149055 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kx627"] Feb 28 04:40:25 crc kubenswrapper[5014]: I0228 04:40:25.167339 5014 scope.go:117] "RemoveContainer" containerID="8b6782a79650a24ca2f71158a12302b475127f23b5ff3ccdd142cf0f7240217b" Feb 28 04:40:25 crc kubenswrapper[5014]: I0228 04:40:25.192557 5014 scope.go:117] "RemoveContainer" containerID="635d7b700cf15c02191534cb5876fc76185e4ccb324da66c05d6eca14ecc191b" Feb 28 04:40:25 crc kubenswrapper[5014]: I0228 04:40:25.216320 5014 scope.go:117] "RemoveContainer" containerID="c0cc124af7da90090bb62569c5a149cbecfaceeadf8e6ff8397672a358fc9852" Feb 28 04:40:25 crc kubenswrapper[5014]: E0228 04:40:25.217143 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"c0cc124af7da90090bb62569c5a149cbecfaceeadf8e6ff8397672a358fc9852\": container with ID starting with c0cc124af7da90090bb62569c5a149cbecfaceeadf8e6ff8397672a358fc9852 not found: ID does not exist" containerID="c0cc124af7da90090bb62569c5a149cbecfaceeadf8e6ff8397672a358fc9852" Feb 28 04:40:25 crc kubenswrapper[5014]: I0228 04:40:25.217181 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0cc124af7da90090bb62569c5a149cbecfaceeadf8e6ff8397672a358fc9852"} err="failed to get container status \"c0cc124af7da90090bb62569c5a149cbecfaceeadf8e6ff8397672a358fc9852\": rpc error: code = NotFound desc = could not find container \"c0cc124af7da90090bb62569c5a149cbecfaceeadf8e6ff8397672a358fc9852\": container with ID starting with c0cc124af7da90090bb62569c5a149cbecfaceeadf8e6ff8397672a358fc9852 not found: ID does not exist" Feb 28 04:40:25 crc kubenswrapper[5014]: I0228 04:40:25.217205 5014 scope.go:117] "RemoveContainer" containerID="8b6782a79650a24ca2f71158a12302b475127f23b5ff3ccdd142cf0f7240217b" Feb 28 04:40:25 crc kubenswrapper[5014]: E0228 04:40:25.218110 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b6782a79650a24ca2f71158a12302b475127f23b5ff3ccdd142cf0f7240217b\": container with ID starting with 8b6782a79650a24ca2f71158a12302b475127f23b5ff3ccdd142cf0f7240217b not found: ID does not exist" containerID="8b6782a79650a24ca2f71158a12302b475127f23b5ff3ccdd142cf0f7240217b" Feb 28 04:40:25 crc kubenswrapper[5014]: I0228 04:40:25.218170 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b6782a79650a24ca2f71158a12302b475127f23b5ff3ccdd142cf0f7240217b"} err="failed to get container status \"8b6782a79650a24ca2f71158a12302b475127f23b5ff3ccdd142cf0f7240217b\": rpc error: code = NotFound desc = could not find container 
\"8b6782a79650a24ca2f71158a12302b475127f23b5ff3ccdd142cf0f7240217b\": container with ID starting with 8b6782a79650a24ca2f71158a12302b475127f23b5ff3ccdd142cf0f7240217b not found: ID does not exist" Feb 28 04:40:25 crc kubenswrapper[5014]: I0228 04:40:25.218214 5014 scope.go:117] "RemoveContainer" containerID="635d7b700cf15c02191534cb5876fc76185e4ccb324da66c05d6eca14ecc191b" Feb 28 04:40:25 crc kubenswrapper[5014]: E0228 04:40:25.218965 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"635d7b700cf15c02191534cb5876fc76185e4ccb324da66c05d6eca14ecc191b\": container with ID starting with 635d7b700cf15c02191534cb5876fc76185e4ccb324da66c05d6eca14ecc191b not found: ID does not exist" containerID="635d7b700cf15c02191534cb5876fc76185e4ccb324da66c05d6eca14ecc191b" Feb 28 04:40:25 crc kubenswrapper[5014]: I0228 04:40:25.218992 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"635d7b700cf15c02191534cb5876fc76185e4ccb324da66c05d6eca14ecc191b"} err="failed to get container status \"635d7b700cf15c02191534cb5876fc76185e4ccb324da66c05d6eca14ecc191b\": rpc error: code = NotFound desc = could not find container \"635d7b700cf15c02191534cb5876fc76185e4ccb324da66c05d6eca14ecc191b\": container with ID starting with 635d7b700cf15c02191534cb5876fc76185e4ccb324da66c05d6eca14ecc191b not found: ID does not exist" Feb 28 04:40:26 crc kubenswrapper[5014]: I0228 04:40:26.179149 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bdb5d29-5a4c-4358-a276-58efd08a8655" path="/var/lib/kubelet/pods/7bdb5d29-5a4c-4358-a276-58efd08a8655/volumes" Feb 28 04:40:26 crc kubenswrapper[5014]: I0228 04:40:26.180087 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd99ec6a-5237-42f9-81ad-bd813d262c6d" path="/var/lib/kubelet/pods/bd99ec6a-5237-42f9-81ad-bd813d262c6d/volumes" Feb 28 04:40:26 crc kubenswrapper[5014]: I0228 04:40:26.448345 
5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hx7qb"] Feb 28 04:40:26 crc kubenswrapper[5014]: I0228 04:40:26.448631 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hx7qb" podUID="3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4" containerName="registry-server" containerID="cri-o://7b31560e4b1aeb695a44ad469fe6f499752151480fcabdf3ec5b0d5247168adc" gracePeriod=2 Feb 28 04:40:26 crc kubenswrapper[5014]: I0228 04:40:26.867662 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hx7qb" Feb 28 04:40:26 crc kubenswrapper[5014]: I0228 04:40:26.907792 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4-catalog-content\") pod \"3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4\" (UID: \"3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4\") " Feb 28 04:40:26 crc kubenswrapper[5014]: I0228 04:40:26.908179 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6q5qx\" (UniqueName: \"kubernetes.io/projected/3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4-kube-api-access-6q5qx\") pod \"3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4\" (UID: \"3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4\") " Feb 28 04:40:26 crc kubenswrapper[5014]: I0228 04:40:26.908269 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4-utilities\") pod \"3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4\" (UID: \"3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4\") " Feb 28 04:40:26 crc kubenswrapper[5014]: I0228 04:40:26.912355 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4-utilities" (OuterVolumeSpecName: "utilities") pod 
"3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4" (UID: "3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:40:26 crc kubenswrapper[5014]: I0228 04:40:26.918888 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4-kube-api-access-6q5qx" (OuterVolumeSpecName: "kube-api-access-6q5qx") pod "3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4" (UID: "3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4"). InnerVolumeSpecName "kube-api-access-6q5qx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:40:27 crc kubenswrapper[5014]: I0228 04:40:27.011158 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6q5qx\" (UniqueName: \"kubernetes.io/projected/3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4-kube-api-access-6q5qx\") on node \"crc\" DevicePath \"\"" Feb 28 04:40:27 crc kubenswrapper[5014]: I0228 04:40:27.011188 5014 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 04:40:27 crc kubenswrapper[5014]: I0228 04:40:27.064903 5014 generic.go:334] "Generic (PLEG): container finished" podID="3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4" containerID="7b31560e4b1aeb695a44ad469fe6f499752151480fcabdf3ec5b0d5247168adc" exitCode=0 Feb 28 04:40:27 crc kubenswrapper[5014]: I0228 04:40:27.064942 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hx7qb" event={"ID":"3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4","Type":"ContainerDied","Data":"7b31560e4b1aeb695a44ad469fe6f499752151480fcabdf3ec5b0d5247168adc"} Feb 28 04:40:27 crc kubenswrapper[5014]: I0228 04:40:27.064967 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hx7qb" 
event={"ID":"3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4","Type":"ContainerDied","Data":"65c0f0a2899c1b046011e4457c5aa3352a06e206a9a8a12b5b8f747cff62ba73"} Feb 28 04:40:27 crc kubenswrapper[5014]: I0228 04:40:27.064986 5014 scope.go:117] "RemoveContainer" containerID="7b31560e4b1aeb695a44ad469fe6f499752151480fcabdf3ec5b0d5247168adc" Feb 28 04:40:27 crc kubenswrapper[5014]: I0228 04:40:27.065075 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hx7qb" Feb 28 04:40:27 crc kubenswrapper[5014]: I0228 04:40:27.071330 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4" (UID: "3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:40:27 crc kubenswrapper[5014]: I0228 04:40:27.078860 5014 scope.go:117] "RemoveContainer" containerID="de7f0fee352c5ca0fdc4a3b43e19a807f1a9b4d221f37fc480852272ee20eb91" Feb 28 04:40:27 crc kubenswrapper[5014]: I0228 04:40:27.092128 5014 scope.go:117] "RemoveContainer" containerID="84a3abd3ab4bb703e7973b0142c129a4022fe7222a1d73ed97112ff58f993a49" Feb 28 04:40:27 crc kubenswrapper[5014]: I0228 04:40:27.108231 5014 scope.go:117] "RemoveContainer" containerID="7b31560e4b1aeb695a44ad469fe6f499752151480fcabdf3ec5b0d5247168adc" Feb 28 04:40:27 crc kubenswrapper[5014]: E0228 04:40:27.108679 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b31560e4b1aeb695a44ad469fe6f499752151480fcabdf3ec5b0d5247168adc\": container with ID starting with 7b31560e4b1aeb695a44ad469fe6f499752151480fcabdf3ec5b0d5247168adc not found: ID does not exist" containerID="7b31560e4b1aeb695a44ad469fe6f499752151480fcabdf3ec5b0d5247168adc" Feb 28 04:40:27 crc 
kubenswrapper[5014]: I0228 04:40:27.108714 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b31560e4b1aeb695a44ad469fe6f499752151480fcabdf3ec5b0d5247168adc"} err="failed to get container status \"7b31560e4b1aeb695a44ad469fe6f499752151480fcabdf3ec5b0d5247168adc\": rpc error: code = NotFound desc = could not find container \"7b31560e4b1aeb695a44ad469fe6f499752151480fcabdf3ec5b0d5247168adc\": container with ID starting with 7b31560e4b1aeb695a44ad469fe6f499752151480fcabdf3ec5b0d5247168adc not found: ID does not exist" Feb 28 04:40:27 crc kubenswrapper[5014]: I0228 04:40:27.108733 5014 scope.go:117] "RemoveContainer" containerID="de7f0fee352c5ca0fdc4a3b43e19a807f1a9b4d221f37fc480852272ee20eb91" Feb 28 04:40:27 crc kubenswrapper[5014]: E0228 04:40:27.109089 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de7f0fee352c5ca0fdc4a3b43e19a807f1a9b4d221f37fc480852272ee20eb91\": container with ID starting with de7f0fee352c5ca0fdc4a3b43e19a807f1a9b4d221f37fc480852272ee20eb91 not found: ID does not exist" containerID="de7f0fee352c5ca0fdc4a3b43e19a807f1a9b4d221f37fc480852272ee20eb91" Feb 28 04:40:27 crc kubenswrapper[5014]: I0228 04:40:27.109108 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de7f0fee352c5ca0fdc4a3b43e19a807f1a9b4d221f37fc480852272ee20eb91"} err="failed to get container status \"de7f0fee352c5ca0fdc4a3b43e19a807f1a9b4d221f37fc480852272ee20eb91\": rpc error: code = NotFound desc = could not find container \"de7f0fee352c5ca0fdc4a3b43e19a807f1a9b4d221f37fc480852272ee20eb91\": container with ID starting with de7f0fee352c5ca0fdc4a3b43e19a807f1a9b4d221f37fc480852272ee20eb91 not found: ID does not exist" Feb 28 04:40:27 crc kubenswrapper[5014]: I0228 04:40:27.109121 5014 scope.go:117] "RemoveContainer" containerID="84a3abd3ab4bb703e7973b0142c129a4022fe7222a1d73ed97112ff58f993a49" Feb 28 
04:40:27 crc kubenswrapper[5014]: E0228 04:40:27.109451 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84a3abd3ab4bb703e7973b0142c129a4022fe7222a1d73ed97112ff58f993a49\": container with ID starting with 84a3abd3ab4bb703e7973b0142c129a4022fe7222a1d73ed97112ff58f993a49 not found: ID does not exist" containerID="84a3abd3ab4bb703e7973b0142c129a4022fe7222a1d73ed97112ff58f993a49" Feb 28 04:40:27 crc kubenswrapper[5014]: I0228 04:40:27.109468 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84a3abd3ab4bb703e7973b0142c129a4022fe7222a1d73ed97112ff58f993a49"} err="failed to get container status \"84a3abd3ab4bb703e7973b0142c129a4022fe7222a1d73ed97112ff58f993a49\": rpc error: code = NotFound desc = could not find container \"84a3abd3ab4bb703e7973b0142c129a4022fe7222a1d73ed97112ff58f993a49\": container with ID starting with 84a3abd3ab4bb703e7973b0142c129a4022fe7222a1d73ed97112ff58f993a49 not found: ID does not exist" Feb 28 04:40:27 crc kubenswrapper[5014]: I0228 04:40:27.112095 5014 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 04:40:27 crc kubenswrapper[5014]: I0228 04:40:27.404350 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hx7qb"] Feb 28 04:40:27 crc kubenswrapper[5014]: I0228 04:40:27.418142 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hx7qb"] Feb 28 04:40:28 crc kubenswrapper[5014]: I0228 04:40:28.183301 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4" path="/var/lib/kubelet/pods/3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4/volumes" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.735048 5014 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-k459l"] Feb 28 04:40:34 crc kubenswrapper[5014]: E0228 04:40:34.735645 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd99ec6a-5237-42f9-81ad-bd813d262c6d" containerName="extract-content" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.735656 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd99ec6a-5237-42f9-81ad-bd813d262c6d" containerName="extract-content" Feb 28 04:40:34 crc kubenswrapper[5014]: E0228 04:40:34.735668 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bdb5d29-5a4c-4358-a276-58efd08a8655" containerName="extract-content" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.735674 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bdb5d29-5a4c-4358-a276-58efd08a8655" containerName="extract-content" Feb 28 04:40:34 crc kubenswrapper[5014]: E0228 04:40:34.735682 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4" containerName="extract-content" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.735689 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4" containerName="extract-content" Feb 28 04:40:34 crc kubenswrapper[5014]: E0228 04:40:34.735697 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4" containerName="extract-utilities" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.735703 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4" containerName="extract-utilities" Feb 28 04:40:34 crc kubenswrapper[5014]: E0228 04:40:34.735715 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bdb5d29-5a4c-4358-a276-58efd08a8655" containerName="registry-server" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.735721 5014 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="7bdb5d29-5a4c-4358-a276-58efd08a8655" containerName="registry-server" Feb 28 04:40:34 crc kubenswrapper[5014]: E0228 04:40:34.735729 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd99ec6a-5237-42f9-81ad-bd813d262c6d" containerName="registry-server" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.735734 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd99ec6a-5237-42f9-81ad-bd813d262c6d" containerName="registry-server" Feb 28 04:40:34 crc kubenswrapper[5014]: E0228 04:40:34.735742 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd99ec6a-5237-42f9-81ad-bd813d262c6d" containerName="extract-utilities" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.735749 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd99ec6a-5237-42f9-81ad-bd813d262c6d" containerName="extract-utilities" Feb 28 04:40:34 crc kubenswrapper[5014]: E0228 04:40:34.735758 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bdb5d29-5a4c-4358-a276-58efd08a8655" containerName="extract-utilities" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.735764 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bdb5d29-5a4c-4358-a276-58efd08a8655" containerName="extract-utilities" Feb 28 04:40:34 crc kubenswrapper[5014]: E0228 04:40:34.735775 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4" containerName="registry-server" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.735781 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4" containerName="registry-server" Feb 28 04:40:34 crc kubenswrapper[5014]: E0228 04:40:34.735789 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56bd259d-1322-4f57-aa09-1384b22a54a9" containerName="oc" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.735795 5014 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="56bd259d-1322-4f57-aa09-1384b22a54a9" containerName="oc" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.735898 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd99ec6a-5237-42f9-81ad-bd813d262c6d" containerName="registry-server" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.735908 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bdb5d29-5a4c-4358-a276-58efd08a8655" containerName="registry-server" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.735920 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d8c5d1a-7a21-4979-94f3-f87d6f63b7b4" containerName="registry-server" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.735930 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="56bd259d-1322-4f57-aa09-1384b22a54a9" containerName="oc" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.736279 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-k459l" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.752112 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-k459l"] Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.810762 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/75dc57be-2a73-4bdf-bd38-b466e9cdaf7a-registry-certificates\") pod \"image-registry-66df7c8f76-k459l\" (UID: \"75dc57be-2a73-4bdf-bd38-b466e9cdaf7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-k459l" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.810838 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75dc57be-2a73-4bdf-bd38-b466e9cdaf7a-trusted-ca\") pod \"image-registry-66df7c8f76-k459l\" 
(UID: \"75dc57be-2a73-4bdf-bd38-b466e9cdaf7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-k459l" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.810856 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/75dc57be-2a73-4bdf-bd38-b466e9cdaf7a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-k459l\" (UID: \"75dc57be-2a73-4bdf-bd38-b466e9cdaf7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-k459l" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.810907 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/75dc57be-2a73-4bdf-bd38-b466e9cdaf7a-registry-tls\") pod \"image-registry-66df7c8f76-k459l\" (UID: \"75dc57be-2a73-4bdf-bd38-b466e9cdaf7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-k459l" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.810957 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-k459l\" (UID: \"75dc57be-2a73-4bdf-bd38-b466e9cdaf7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-k459l" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.810975 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/75dc57be-2a73-4bdf-bd38-b466e9cdaf7a-bound-sa-token\") pod \"image-registry-66df7c8f76-k459l\" (UID: \"75dc57be-2a73-4bdf-bd38-b466e9cdaf7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-k459l" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.811019 5014 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/75dc57be-2a73-4bdf-bd38-b466e9cdaf7a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-k459l\" (UID: \"75dc57be-2a73-4bdf-bd38-b466e9cdaf7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-k459l" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.811075 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2jg6\" (UniqueName: \"kubernetes.io/projected/75dc57be-2a73-4bdf-bd38-b466e9cdaf7a-kube-api-access-l2jg6\") pod \"image-registry-66df7c8f76-k459l\" (UID: \"75dc57be-2a73-4bdf-bd38-b466e9cdaf7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-k459l" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.833770 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-k459l\" (UID: \"75dc57be-2a73-4bdf-bd38-b466e9cdaf7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-k459l" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.911726 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/75dc57be-2a73-4bdf-bd38-b466e9cdaf7a-registry-tls\") pod \"image-registry-66df7c8f76-k459l\" (UID: \"75dc57be-2a73-4bdf-bd38-b466e9cdaf7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-k459l" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.911994 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/75dc57be-2a73-4bdf-bd38-b466e9cdaf7a-bound-sa-token\") pod \"image-registry-66df7c8f76-k459l\" (UID: \"75dc57be-2a73-4bdf-bd38-b466e9cdaf7a\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-k459l" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.912026 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/75dc57be-2a73-4bdf-bd38-b466e9cdaf7a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-k459l\" (UID: \"75dc57be-2a73-4bdf-bd38-b466e9cdaf7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-k459l" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.912073 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2jg6\" (UniqueName: \"kubernetes.io/projected/75dc57be-2a73-4bdf-bd38-b466e9cdaf7a-kube-api-access-l2jg6\") pod \"image-registry-66df7c8f76-k459l\" (UID: \"75dc57be-2a73-4bdf-bd38-b466e9cdaf7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-k459l" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.912094 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75dc57be-2a73-4bdf-bd38-b466e9cdaf7a-trusted-ca\") pod \"image-registry-66df7c8f76-k459l\" (UID: \"75dc57be-2a73-4bdf-bd38-b466e9cdaf7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-k459l" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.912112 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/75dc57be-2a73-4bdf-bd38-b466e9cdaf7a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-k459l\" (UID: \"75dc57be-2a73-4bdf-bd38-b466e9cdaf7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-k459l" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.912130 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/75dc57be-2a73-4bdf-bd38-b466e9cdaf7a-registry-certificates\") pod \"image-registry-66df7c8f76-k459l\" (UID: \"75dc57be-2a73-4bdf-bd38-b466e9cdaf7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-k459l" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.913152 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/75dc57be-2a73-4bdf-bd38-b466e9cdaf7a-registry-certificates\") pod \"image-registry-66df7c8f76-k459l\" (UID: \"75dc57be-2a73-4bdf-bd38-b466e9cdaf7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-k459l" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.913986 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/75dc57be-2a73-4bdf-bd38-b466e9cdaf7a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-k459l\" (UID: \"75dc57be-2a73-4bdf-bd38-b466e9cdaf7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-k459l" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.914787 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75dc57be-2a73-4bdf-bd38-b466e9cdaf7a-trusted-ca\") pod \"image-registry-66df7c8f76-k459l\" (UID: \"75dc57be-2a73-4bdf-bd38-b466e9cdaf7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-k459l" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.922582 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/75dc57be-2a73-4bdf-bd38-b466e9cdaf7a-registry-tls\") pod \"image-registry-66df7c8f76-k459l\" (UID: \"75dc57be-2a73-4bdf-bd38-b466e9cdaf7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-k459l" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.923446 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/75dc57be-2a73-4bdf-bd38-b466e9cdaf7a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-k459l\" (UID: \"75dc57be-2a73-4bdf-bd38-b466e9cdaf7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-k459l" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.936572 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2jg6\" (UniqueName: \"kubernetes.io/projected/75dc57be-2a73-4bdf-bd38-b466e9cdaf7a-kube-api-access-l2jg6\") pod \"image-registry-66df7c8f76-k459l\" (UID: \"75dc57be-2a73-4bdf-bd38-b466e9cdaf7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-k459l" Feb 28 04:40:34 crc kubenswrapper[5014]: I0228 04:40:34.956340 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/75dc57be-2a73-4bdf-bd38-b466e9cdaf7a-bound-sa-token\") pod \"image-registry-66df7c8f76-k459l\" (UID: \"75dc57be-2a73-4bdf-bd38-b466e9cdaf7a\") " pod="openshift-image-registry/image-registry-66df7c8f76-k459l" Feb 28 04:40:35 crc kubenswrapper[5014]: I0228 04:40:35.060627 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-k459l" Feb 28 04:40:35 crc kubenswrapper[5014]: I0228 04:40:35.345778 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-k459l"] Feb 28 04:40:36 crc kubenswrapper[5014]: I0228 04:40:36.127214 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-k459l" event={"ID":"75dc57be-2a73-4bdf-bd38-b466e9cdaf7a","Type":"ContainerStarted","Data":"0ddf7428903d4aee5b09e73e7162105750543da5c375fc89d53c3b8610aa7316"} Feb 28 04:40:36 crc kubenswrapper[5014]: I0228 04:40:36.127617 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-k459l" event={"ID":"75dc57be-2a73-4bdf-bd38-b466e9cdaf7a","Type":"ContainerStarted","Data":"10945f747bef4f96a3c12140d7de1399b43279fbbc713400ce4a0aa89780630d"} Feb 28 04:40:36 crc kubenswrapper[5014]: I0228 04:40:36.127642 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-k459l" Feb 28 04:40:36 crc kubenswrapper[5014]: I0228 04:40:36.145751 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-k459l" podStartSLOduration=2.1457314419999998 podStartE2EDuration="2.145731442s" podCreationTimestamp="2026-02-28 04:40:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:40:36.143604991 +0000 UTC m=+424.813730921" watchObservedRunningTime="2026-02-28 04:40:36.145731442 +0000 UTC m=+424.815857352" Feb 28 04:40:41 crc kubenswrapper[5014]: I0228 04:40:41.942037 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sqfvs"] Feb 28 04:40:41 crc kubenswrapper[5014]: I0228 04:40:41.943369 5014 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/certified-operators-sqfvs" podUID="50cf3400-fb73-4038-b616-2d3559aaf784" containerName="registry-server" containerID="cri-o://27cbe99ab4658bfe6b52aac789ba02457379a32f74bf13136730c5b0c69a0f4e" gracePeriod=30 Feb 28 04:40:41 crc kubenswrapper[5014]: I0228 04:40:41.957079 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9cznf"] Feb 28 04:40:41 crc kubenswrapper[5014]: I0228 04:40:41.957697 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9cznf" podUID="52079806-fc0c-4852-8150-0123d376c1b2" containerName="registry-server" containerID="cri-o://d581abc1b7c171ea12adfd3289725c060f4ce47ee40d93f232591cc0e173df7a" gracePeriod=30 Feb 28 04:40:41 crc kubenswrapper[5014]: I0228 04:40:41.963796 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wxczw"] Feb 28 04:40:41 crc kubenswrapper[5014]: I0228 04:40:41.964138 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-wxczw" podUID="b60b7614-e66f-4184-b1ff-10fb0ba1ed31" containerName="marketplace-operator" containerID="cri-o://36b6894dfff18f968ac331dbaf2d9dcd27119fcfaad1529df1d59395f320b824" gracePeriod=30 Feb 28 04:40:41 crc kubenswrapper[5014]: I0228 04:40:41.971749 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-npdf6"] Feb 28 04:40:41 crc kubenswrapper[5014]: I0228 04:40:41.972125 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-npdf6" podUID="8a00f74f-e858-42cc-b882-492afd45684d" containerName="registry-server" containerID="cri-o://4351bfe2acee3deca8041e42244892f1d8d53660d71f470076df4c370278e406" gracePeriod=30 Feb 28 04:40:41 crc kubenswrapper[5014]: I0228 04:40:41.986265 
5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lz2dz"] Feb 28 04:40:41 crc kubenswrapper[5014]: I0228 04:40:41.991960 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lz2dz" Feb 28 04:40:41 crc kubenswrapper[5014]: I0228 04:40:41.997360 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5zq82"] Feb 28 04:40:41 crc kubenswrapper[5014]: I0228 04:40:41.997650 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5zq82" podUID="bba9702f-9e04-46d4-9a98-92d5303383c4" containerName="registry-server" containerID="cri-o://8db62d0137fa23ba071b5293ea6547d3f10ce3906d4420ceae3adde607ddace5" gracePeriod=30 Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.003671 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lz2dz"] Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.011398 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/da5f8445-0b83-49d2-8255-21a4074cbf0b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lz2dz\" (UID: \"da5f8445-0b83-49d2-8255-21a4074cbf0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-lz2dz" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.020394 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tptxd\" (UniqueName: \"kubernetes.io/projected/da5f8445-0b83-49d2-8255-21a4074cbf0b-kube-api-access-tptxd\") pod \"marketplace-operator-79b997595-lz2dz\" (UID: \"da5f8445-0b83-49d2-8255-21a4074cbf0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-lz2dz" Feb 28 04:40:42 crc kubenswrapper[5014]: 
I0228 04:40:42.020706 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/da5f8445-0b83-49d2-8255-21a4074cbf0b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lz2dz\" (UID: \"da5f8445-0b83-49d2-8255-21a4074cbf0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-lz2dz" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.121546 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/da5f8445-0b83-49d2-8255-21a4074cbf0b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lz2dz\" (UID: \"da5f8445-0b83-49d2-8255-21a4074cbf0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-lz2dz" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.121711 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/da5f8445-0b83-49d2-8255-21a4074cbf0b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lz2dz\" (UID: \"da5f8445-0b83-49d2-8255-21a4074cbf0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-lz2dz" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.121742 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tptxd\" (UniqueName: \"kubernetes.io/projected/da5f8445-0b83-49d2-8255-21a4074cbf0b-kube-api-access-tptxd\") pod \"marketplace-operator-79b997595-lz2dz\" (UID: \"da5f8445-0b83-49d2-8255-21a4074cbf0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-lz2dz" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.122668 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/da5f8445-0b83-49d2-8255-21a4074cbf0b-marketplace-trusted-ca\") pod 
\"marketplace-operator-79b997595-lz2dz\" (UID: \"da5f8445-0b83-49d2-8255-21a4074cbf0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-lz2dz" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.133378 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/da5f8445-0b83-49d2-8255-21a4074cbf0b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lz2dz\" (UID: \"da5f8445-0b83-49d2-8255-21a4074cbf0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-lz2dz" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.138313 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tptxd\" (UniqueName: \"kubernetes.io/projected/da5f8445-0b83-49d2-8255-21a4074cbf0b-kube-api-access-tptxd\") pod \"marketplace-operator-79b997595-lz2dz\" (UID: \"da5f8445-0b83-49d2-8255-21a4074cbf0b\") " pod="openshift-marketplace/marketplace-operator-79b997595-lz2dz" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.187937 5014 generic.go:334] "Generic (PLEG): container finished" podID="50cf3400-fb73-4038-b616-2d3559aaf784" containerID="27cbe99ab4658bfe6b52aac789ba02457379a32f74bf13136730c5b0c69a0f4e" exitCode=0 Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.188009 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqfvs" event={"ID":"50cf3400-fb73-4038-b616-2d3559aaf784","Type":"ContainerDied","Data":"27cbe99ab4658bfe6b52aac789ba02457379a32f74bf13136730c5b0c69a0f4e"} Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.193193 5014 generic.go:334] "Generic (PLEG): container finished" podID="bba9702f-9e04-46d4-9a98-92d5303383c4" containerID="8db62d0137fa23ba071b5293ea6547d3f10ce3906d4420ceae3adde607ddace5" exitCode=0 Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.193262 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-5zq82" event={"ID":"bba9702f-9e04-46d4-9a98-92d5303383c4","Type":"ContainerDied","Data":"8db62d0137fa23ba071b5293ea6547d3f10ce3906d4420ceae3adde607ddace5"} Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.196025 5014 generic.go:334] "Generic (PLEG): container finished" podID="52079806-fc0c-4852-8150-0123d376c1b2" containerID="d581abc1b7c171ea12adfd3289725c060f4ce47ee40d93f232591cc0e173df7a" exitCode=0 Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.196061 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9cznf" event={"ID":"52079806-fc0c-4852-8150-0123d376c1b2","Type":"ContainerDied","Data":"d581abc1b7c171ea12adfd3289725c060f4ce47ee40d93f232591cc0e173df7a"} Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.197418 5014 generic.go:334] "Generic (PLEG): container finished" podID="8a00f74f-e858-42cc-b882-492afd45684d" containerID="4351bfe2acee3deca8041e42244892f1d8d53660d71f470076df4c370278e406" exitCode=0 Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.197462 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npdf6" event={"ID":"8a00f74f-e858-42cc-b882-492afd45684d","Type":"ContainerDied","Data":"4351bfe2acee3deca8041e42244892f1d8d53660d71f470076df4c370278e406"} Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.198705 5014 generic.go:334] "Generic (PLEG): container finished" podID="b60b7614-e66f-4184-b1ff-10fb0ba1ed31" containerID="36b6894dfff18f968ac331dbaf2d9dcd27119fcfaad1529df1d59395f320b824" exitCode=0 Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.198775 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wxczw" event={"ID":"b60b7614-e66f-4184-b1ff-10fb0ba1ed31","Type":"ContainerDied","Data":"36b6894dfff18f968ac331dbaf2d9dcd27119fcfaad1529df1d59395f320b824"} Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 
04:40:42.321758 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lz2dz" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.455011 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-npdf6" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.525719 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tr7z5\" (UniqueName: \"kubernetes.io/projected/8a00f74f-e858-42cc-b882-492afd45684d-kube-api-access-tr7z5\") pod \"8a00f74f-e858-42cc-b882-492afd45684d\" (UID: \"8a00f74f-e858-42cc-b882-492afd45684d\") " Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.525908 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a00f74f-e858-42cc-b882-492afd45684d-utilities\") pod \"8a00f74f-e858-42cc-b882-492afd45684d\" (UID: \"8a00f74f-e858-42cc-b882-492afd45684d\") " Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.525977 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a00f74f-e858-42cc-b882-492afd45684d-catalog-content\") pod \"8a00f74f-e858-42cc-b882-492afd45684d\" (UID: \"8a00f74f-e858-42cc-b882-492afd45684d\") " Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.529319 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a00f74f-e858-42cc-b882-492afd45684d-utilities" (OuterVolumeSpecName: "utilities") pod "8a00f74f-e858-42cc-b882-492afd45684d" (UID: "8a00f74f-e858-42cc-b882-492afd45684d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.531088 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a00f74f-e858-42cc-b882-492afd45684d-kube-api-access-tr7z5" (OuterVolumeSpecName: "kube-api-access-tr7z5") pod "8a00f74f-e858-42cc-b882-492afd45684d" (UID: "8a00f74f-e858-42cc-b882-492afd45684d"). InnerVolumeSpecName "kube-api-access-tr7z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.545633 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wxczw" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.552767 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5zq82" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.570974 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sqfvs" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.581790 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9cznf" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.583155 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a00f74f-e858-42cc-b882-492afd45684d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a00f74f-e858-42cc-b882-492afd45684d" (UID: "8a00f74f-e858-42cc-b882-492afd45684d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.627647 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b60b7614-e66f-4184-b1ff-10fb0ba1ed31-marketplace-trusted-ca\") pod \"b60b7614-e66f-4184-b1ff-10fb0ba1ed31\" (UID: \"b60b7614-e66f-4184-b1ff-10fb0ba1ed31\") " Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.627733 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-725hc\" (UniqueName: \"kubernetes.io/projected/bba9702f-9e04-46d4-9a98-92d5303383c4-kube-api-access-725hc\") pod \"bba9702f-9e04-46d4-9a98-92d5303383c4\" (UID: \"bba9702f-9e04-46d4-9a98-92d5303383c4\") " Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.627762 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b60b7614-e66f-4184-b1ff-10fb0ba1ed31-marketplace-operator-metrics\") pod \"b60b7614-e66f-4184-b1ff-10fb0ba1ed31\" (UID: \"b60b7614-e66f-4184-b1ff-10fb0ba1ed31\") " Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.627793 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50cf3400-fb73-4038-b616-2d3559aaf784-utilities\") pod \"50cf3400-fb73-4038-b616-2d3559aaf784\" (UID: \"50cf3400-fb73-4038-b616-2d3559aaf784\") " Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.627847 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bba9702f-9e04-46d4-9a98-92d5303383c4-utilities\") pod \"bba9702f-9e04-46d4-9a98-92d5303383c4\" (UID: \"bba9702f-9e04-46d4-9a98-92d5303383c4\") " Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.627872 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-vckt4\" (UniqueName: \"kubernetes.io/projected/b60b7614-e66f-4184-b1ff-10fb0ba1ed31-kube-api-access-vckt4\") pod \"b60b7614-e66f-4184-b1ff-10fb0ba1ed31\" (UID: \"b60b7614-e66f-4184-b1ff-10fb0ba1ed31\") " Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.627894 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52079806-fc0c-4852-8150-0123d376c1b2-utilities\") pod \"52079806-fc0c-4852-8150-0123d376c1b2\" (UID: \"52079806-fc0c-4852-8150-0123d376c1b2\") " Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.627925 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bba9702f-9e04-46d4-9a98-92d5303383c4-catalog-content\") pod \"bba9702f-9e04-46d4-9a98-92d5303383c4\" (UID: \"bba9702f-9e04-46d4-9a98-92d5303383c4\") " Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.627951 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50cf3400-fb73-4038-b616-2d3559aaf784-catalog-content\") pod \"50cf3400-fb73-4038-b616-2d3559aaf784\" (UID: \"50cf3400-fb73-4038-b616-2d3559aaf784\") " Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.627984 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52079806-fc0c-4852-8150-0123d376c1b2-catalog-content\") pod \"52079806-fc0c-4852-8150-0123d376c1b2\" (UID: \"52079806-fc0c-4852-8150-0123d376c1b2\") " Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.628016 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkdvt\" (UniqueName: \"kubernetes.io/projected/52079806-fc0c-4852-8150-0123d376c1b2-kube-api-access-qkdvt\") pod \"52079806-fc0c-4852-8150-0123d376c1b2\" (UID: 
\"52079806-fc0c-4852-8150-0123d376c1b2\") " Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.628045 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmkg5\" (UniqueName: \"kubernetes.io/projected/50cf3400-fb73-4038-b616-2d3559aaf784-kube-api-access-vmkg5\") pod \"50cf3400-fb73-4038-b616-2d3559aaf784\" (UID: \"50cf3400-fb73-4038-b616-2d3559aaf784\") " Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.629131 5014 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a00f74f-e858-42cc-b882-492afd45684d-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.629156 5014 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a00f74f-e858-42cc-b882-492afd45684d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.629170 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tr7z5\" (UniqueName: \"kubernetes.io/projected/8a00f74f-e858-42cc-b882-492afd45684d-kube-api-access-tr7z5\") on node \"crc\" DevicePath \"\"" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.629240 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52079806-fc0c-4852-8150-0123d376c1b2-utilities" (OuterVolumeSpecName: "utilities") pod "52079806-fc0c-4852-8150-0123d376c1b2" (UID: "52079806-fc0c-4852-8150-0123d376c1b2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.630658 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b60b7614-e66f-4184-b1ff-10fb0ba1ed31-kube-api-access-vckt4" (OuterVolumeSpecName: "kube-api-access-vckt4") pod "b60b7614-e66f-4184-b1ff-10fb0ba1ed31" (UID: "b60b7614-e66f-4184-b1ff-10fb0ba1ed31"). InnerVolumeSpecName "kube-api-access-vckt4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.631693 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50cf3400-fb73-4038-b616-2d3559aaf784-kube-api-access-vmkg5" (OuterVolumeSpecName: "kube-api-access-vmkg5") pod "50cf3400-fb73-4038-b616-2d3559aaf784" (UID: "50cf3400-fb73-4038-b616-2d3559aaf784"). InnerVolumeSpecName "kube-api-access-vmkg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.631922 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50cf3400-fb73-4038-b616-2d3559aaf784-utilities" (OuterVolumeSpecName: "utilities") pod "50cf3400-fb73-4038-b616-2d3559aaf784" (UID: "50cf3400-fb73-4038-b616-2d3559aaf784"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.632461 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b60b7614-e66f-4184-b1ff-10fb0ba1ed31-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b60b7614-e66f-4184-b1ff-10fb0ba1ed31" (UID: "b60b7614-e66f-4184-b1ff-10fb0ba1ed31"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.632878 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b60b7614-e66f-4184-b1ff-10fb0ba1ed31-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b60b7614-e66f-4184-b1ff-10fb0ba1ed31" (UID: "b60b7614-e66f-4184-b1ff-10fb0ba1ed31"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.635052 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bba9702f-9e04-46d4-9a98-92d5303383c4-kube-api-access-725hc" (OuterVolumeSpecName: "kube-api-access-725hc") pod "bba9702f-9e04-46d4-9a98-92d5303383c4" (UID: "bba9702f-9e04-46d4-9a98-92d5303383c4"). InnerVolumeSpecName "kube-api-access-725hc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.635186 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52079806-fc0c-4852-8150-0123d376c1b2-kube-api-access-qkdvt" (OuterVolumeSpecName: "kube-api-access-qkdvt") pod "52079806-fc0c-4852-8150-0123d376c1b2" (UID: "52079806-fc0c-4852-8150-0123d376c1b2"). InnerVolumeSpecName "kube-api-access-qkdvt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.635371 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bba9702f-9e04-46d4-9a98-92d5303383c4-utilities" (OuterVolumeSpecName: "utilities") pod "bba9702f-9e04-46d4-9a98-92d5303383c4" (UID: "bba9702f-9e04-46d4-9a98-92d5303383c4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.702608 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50cf3400-fb73-4038-b616-2d3559aaf784-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "50cf3400-fb73-4038-b616-2d3559aaf784" (UID: "50cf3400-fb73-4038-b616-2d3559aaf784"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.707621 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/52079806-fc0c-4852-8150-0123d376c1b2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "52079806-fc0c-4852-8150-0123d376c1b2" (UID: "52079806-fc0c-4852-8150-0123d376c1b2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.730930 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-725hc\" (UniqueName: \"kubernetes.io/projected/bba9702f-9e04-46d4-9a98-92d5303383c4-kube-api-access-725hc\") on node \"crc\" DevicePath \"\"" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.730956 5014 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b60b7614-e66f-4184-b1ff-10fb0ba1ed31-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.730967 5014 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50cf3400-fb73-4038-b616-2d3559aaf784-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.730978 5014 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/bba9702f-9e04-46d4-9a98-92d5303383c4-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.730986 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vckt4\" (UniqueName: \"kubernetes.io/projected/b60b7614-e66f-4184-b1ff-10fb0ba1ed31-kube-api-access-vckt4\") on node \"crc\" DevicePath \"\"" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.730995 5014 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/52079806-fc0c-4852-8150-0123d376c1b2-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.731003 5014 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50cf3400-fb73-4038-b616-2d3559aaf784-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.731012 5014 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/52079806-fc0c-4852-8150-0123d376c1b2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.731021 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkdvt\" (UniqueName: \"kubernetes.io/projected/52079806-fc0c-4852-8150-0123d376c1b2-kube-api-access-qkdvt\") on node \"crc\" DevicePath \"\"" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.731029 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmkg5\" (UniqueName: \"kubernetes.io/projected/50cf3400-fb73-4038-b616-2d3559aaf784-kube-api-access-vmkg5\") on node \"crc\" DevicePath \"\"" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.731036 5014 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/b60b7614-e66f-4184-b1ff-10fb0ba1ed31-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.771434 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bba9702f-9e04-46d4-9a98-92d5303383c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bba9702f-9e04-46d4-9a98-92d5303383c4" (UID: "bba9702f-9e04-46d4-9a98-92d5303383c4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.793151 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lz2dz"] Feb 28 04:40:42 crc kubenswrapper[5014]: W0228 04:40:42.797077 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda5f8445_0b83_49d2_8255_21a4074cbf0b.slice/crio-a0bdb1d24222a7d6e5e9ec1b55cf39b889cafdb301981b3bbf8fb30c40667e79 WatchSource:0}: Error finding container a0bdb1d24222a7d6e5e9ec1b55cf39b889cafdb301981b3bbf8fb30c40667e79: Status 404 returned error can't find the container with id a0bdb1d24222a7d6e5e9ec1b55cf39b889cafdb301981b3bbf8fb30c40667e79 Feb 28 04:40:42 crc kubenswrapper[5014]: I0228 04:40:42.833632 5014 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bba9702f-9e04-46d4-9a98-92d5303383c4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.209105 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9cznf" event={"ID":"52079806-fc0c-4852-8150-0123d376c1b2","Type":"ContainerDied","Data":"029101b17b47ef3f330f6fe6a57689cf3e21070e70249db678506708b50cea87"} Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.209547 5014 scope.go:117] "RemoveContainer" 
containerID="d581abc1b7c171ea12adfd3289725c060f4ce47ee40d93f232591cc0e173df7a" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.209135 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9cznf" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.212030 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lz2dz" event={"ID":"da5f8445-0b83-49d2-8255-21a4074cbf0b","Type":"ContainerStarted","Data":"8c8c930e783e1692a9cca666b12ee676b086f4dd895d74863d39d71c9fbce17d"} Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.212086 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lz2dz" event={"ID":"da5f8445-0b83-49d2-8255-21a4074cbf0b","Type":"ContainerStarted","Data":"a0bdb1d24222a7d6e5e9ec1b55cf39b889cafdb301981b3bbf8fb30c40667e79"} Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.213078 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-lz2dz" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.216639 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-lz2dz" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.218731 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-npdf6" event={"ID":"8a00f74f-e858-42cc-b882-492afd45684d","Type":"ContainerDied","Data":"7d89f646c6b2e1506360c194ff40f4142b8bf5415f10b823f60d7897fb86e1c4"} Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.218881 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-npdf6" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.223414 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wxczw" event={"ID":"b60b7614-e66f-4184-b1ff-10fb0ba1ed31","Type":"ContainerDied","Data":"e52d8d9591447f9328694775ada7b2bbe1b9c8efac8fdff6a859c7eac51d0af8"} Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.223481 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wxczw" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.225647 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqfvs" event={"ID":"50cf3400-fb73-4038-b616-2d3559aaf784","Type":"ContainerDied","Data":"30ba94d403b5324b549cabf95316902b5413cb95ee203dc6325869fceef711ac"} Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.225671 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sqfvs" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.226789 5014 scope.go:117] "RemoveContainer" containerID="3bd97587235e7e11d8b5a8594f80bb2b49ffc96e41504ac06bdb983c8ce07d1d" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.229953 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5zq82" event={"ID":"bba9702f-9e04-46d4-9a98-92d5303383c4","Type":"ContainerDied","Data":"33d54e0fd535d7b84533ec8eaf3fd608cf7cf5a85c89fdb6273a18b0308dab02"} Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.230097 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5zq82" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.246214 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-lz2dz" podStartSLOduration=2.246106314 podStartE2EDuration="2.246106314s" podCreationTimestamp="2026-02-28 04:40:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:40:43.232495044 +0000 UTC m=+431.902620994" watchObservedRunningTime="2026-02-28 04:40:43.246106314 +0000 UTC m=+431.916232244" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.254474 5014 scope.go:117] "RemoveContainer" containerID="3887c3314de07d5bc5a02a84043f4e0063c5a18cc9918fcc61bcbd542efb30b9" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.277766 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9cznf"] Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.281750 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9cznf"] Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.296788 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sqfvs"] Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.302881 5014 scope.go:117] "RemoveContainer" containerID="4351bfe2acee3deca8041e42244892f1d8d53660d71f470076df4c370278e406" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.304151 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sqfvs"] Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.311614 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-npdf6"] Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.319765 5014 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-marketplace/redhat-marketplace-npdf6"] Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.325752 5014 scope.go:117] "RemoveContainer" containerID="533d9b88cb01043fe2246d71f096ac074cbbd32a000590d3da5a19183cc335c4" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.333987 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wxczw"] Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.347670 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wxczw"] Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.355969 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5zq82"] Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.359364 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5zq82"] Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.360909 5014 scope.go:117] "RemoveContainer" containerID="78a1f9fe3660c5e4df91192578063f1d3478cf19cd86a2a827284a8bb11b40fe" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.373185 5014 scope.go:117] "RemoveContainer" containerID="36b6894dfff18f968ac331dbaf2d9dcd27119fcfaad1529df1d59395f320b824" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.389912 5014 scope.go:117] "RemoveContainer" containerID="27cbe99ab4658bfe6b52aac789ba02457379a32f74bf13136730c5b0c69a0f4e" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.400681 5014 scope.go:117] "RemoveContainer" containerID="29f2dfffbf0470555eb7b9ebf16d9b06b5ecfe15b1a7e425c9b02dc66dce62ed" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.414667 5014 scope.go:117] "RemoveContainer" containerID="c58870833a4b295eea0a120f2f28e7b596e36a9798e90855009cca95fe301cae" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.433968 5014 scope.go:117] "RemoveContainer" 
containerID="8db62d0137fa23ba071b5293ea6547d3f10ce3906d4420ceae3adde607ddace5" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.457238 5014 scope.go:117] "RemoveContainer" containerID="efba7c4f0f824ce32d4eeb841869734789a5f143768940962c467dfd5ca7984a" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.473759 5014 scope.go:117] "RemoveContainer" containerID="a5c70c1addd5fd7d86bdc3ae5cecdcff87614886c9d9a3aff217b654a05fa6a9" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.754862 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-kc599"] Feb 28 04:40:43 crc kubenswrapper[5014]: E0228 04:40:43.755222 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bba9702f-9e04-46d4-9a98-92d5303383c4" containerName="extract-utilities" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.755244 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="bba9702f-9e04-46d4-9a98-92d5303383c4" containerName="extract-utilities" Feb 28 04:40:43 crc kubenswrapper[5014]: E0228 04:40:43.755263 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50cf3400-fb73-4038-b616-2d3559aaf784" containerName="extract-utilities" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.755276 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="50cf3400-fb73-4038-b616-2d3559aaf784" containerName="extract-utilities" Feb 28 04:40:43 crc kubenswrapper[5014]: E0228 04:40:43.755291 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a00f74f-e858-42cc-b882-492afd45684d" containerName="registry-server" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.755304 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a00f74f-e858-42cc-b882-492afd45684d" containerName="registry-server" Feb 28 04:40:43 crc kubenswrapper[5014]: E0228 04:40:43.755322 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bba9702f-9e04-46d4-9a98-92d5303383c4" 
containerName="registry-server" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.755336 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="bba9702f-9e04-46d4-9a98-92d5303383c4" containerName="registry-server" Feb 28 04:40:43 crc kubenswrapper[5014]: E0228 04:40:43.755352 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bba9702f-9e04-46d4-9a98-92d5303383c4" containerName="extract-content" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.755364 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="bba9702f-9e04-46d4-9a98-92d5303383c4" containerName="extract-content" Feb 28 04:40:43 crc kubenswrapper[5014]: E0228 04:40:43.755380 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52079806-fc0c-4852-8150-0123d376c1b2" containerName="extract-content" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.755393 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="52079806-fc0c-4852-8150-0123d376c1b2" containerName="extract-content" Feb 28 04:40:43 crc kubenswrapper[5014]: E0228 04:40:43.755411 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52079806-fc0c-4852-8150-0123d376c1b2" containerName="extract-utilities" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.755423 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="52079806-fc0c-4852-8150-0123d376c1b2" containerName="extract-utilities" Feb 28 04:40:43 crc kubenswrapper[5014]: E0228 04:40:43.755437 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50cf3400-fb73-4038-b616-2d3559aaf784" containerName="registry-server" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.755449 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="50cf3400-fb73-4038-b616-2d3559aaf784" containerName="registry-server" Feb 28 04:40:43 crc kubenswrapper[5014]: E0228 04:40:43.755466 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50cf3400-fb73-4038-b616-2d3559aaf784" 
containerName="extract-content" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.755499 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="50cf3400-fb73-4038-b616-2d3559aaf784" containerName="extract-content" Feb 28 04:40:43 crc kubenswrapper[5014]: E0228 04:40:43.755514 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a00f74f-e858-42cc-b882-492afd45684d" containerName="extract-content" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.755525 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a00f74f-e858-42cc-b882-492afd45684d" containerName="extract-content" Feb 28 04:40:43 crc kubenswrapper[5014]: E0228 04:40:43.755543 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b60b7614-e66f-4184-b1ff-10fb0ba1ed31" containerName="marketplace-operator" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.755554 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="b60b7614-e66f-4184-b1ff-10fb0ba1ed31" containerName="marketplace-operator" Feb 28 04:40:43 crc kubenswrapper[5014]: E0228 04:40:43.755567 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52079806-fc0c-4852-8150-0123d376c1b2" containerName="registry-server" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.755578 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="52079806-fc0c-4852-8150-0123d376c1b2" containerName="registry-server" Feb 28 04:40:43 crc kubenswrapper[5014]: E0228 04:40:43.755595 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a00f74f-e858-42cc-b882-492afd45684d" containerName="extract-utilities" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.755606 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a00f74f-e858-42cc-b882-492afd45684d" containerName="extract-utilities" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.755772 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="b60b7614-e66f-4184-b1ff-10fb0ba1ed31" 
containerName="marketplace-operator" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.755794 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="52079806-fc0c-4852-8150-0123d376c1b2" containerName="registry-server" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.755836 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="bba9702f-9e04-46d4-9a98-92d5303383c4" containerName="registry-server" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.755856 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="50cf3400-fb73-4038-b616-2d3559aaf784" containerName="registry-server" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.755872 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a00f74f-e858-42cc-b882-492afd45684d" containerName="registry-server" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.757033 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-kc599" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.760470 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.770973 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kc599"] Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.860069 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1286af62-b972-4b45-a18b-f7e0085a1a69-catalog-content\") pod \"certified-operators-kc599\" (UID: \"1286af62-b972-4b45-a18b-f7e0085a1a69\") " pod="openshift-marketplace/certified-operators-kc599" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.860175 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-mm494\" (UniqueName: \"kubernetes.io/projected/1286af62-b972-4b45-a18b-f7e0085a1a69-kube-api-access-mm494\") pod \"certified-operators-kc599\" (UID: \"1286af62-b972-4b45-a18b-f7e0085a1a69\") " pod="openshift-marketplace/certified-operators-kc599" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.860221 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1286af62-b972-4b45-a18b-f7e0085a1a69-utilities\") pod \"certified-operators-kc599\" (UID: \"1286af62-b972-4b45-a18b-f7e0085a1a69\") " pod="openshift-marketplace/certified-operators-kc599" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.961207 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1286af62-b972-4b45-a18b-f7e0085a1a69-catalog-content\") pod \"certified-operators-kc599\" (UID: \"1286af62-b972-4b45-a18b-f7e0085a1a69\") " pod="openshift-marketplace/certified-operators-kc599" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.961339 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mm494\" (UniqueName: \"kubernetes.io/projected/1286af62-b972-4b45-a18b-f7e0085a1a69-kube-api-access-mm494\") pod \"certified-operators-kc599\" (UID: \"1286af62-b972-4b45-a18b-f7e0085a1a69\") " pod="openshift-marketplace/certified-operators-kc599" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.961413 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1286af62-b972-4b45-a18b-f7e0085a1a69-utilities\") pod \"certified-operators-kc599\" (UID: \"1286af62-b972-4b45-a18b-f7e0085a1a69\") " pod="openshift-marketplace/certified-operators-kc599" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.962254 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1286af62-b972-4b45-a18b-f7e0085a1a69-utilities\") pod \"certified-operators-kc599\" (UID: \"1286af62-b972-4b45-a18b-f7e0085a1a69\") " pod="openshift-marketplace/certified-operators-kc599" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.962448 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1286af62-b972-4b45-a18b-f7e0085a1a69-catalog-content\") pod \"certified-operators-kc599\" (UID: \"1286af62-b972-4b45-a18b-f7e0085a1a69\") " pod="openshift-marketplace/certified-operators-kc599" Feb 28 04:40:43 crc kubenswrapper[5014]: I0228 04:40:43.984005 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mm494\" (UniqueName: \"kubernetes.io/projected/1286af62-b972-4b45-a18b-f7e0085a1a69-kube-api-access-mm494\") pod \"certified-operators-kc599\" (UID: \"1286af62-b972-4b45-a18b-f7e0085a1a69\") " pod="openshift-marketplace/certified-operators-kc599" Feb 28 04:40:44 crc kubenswrapper[5014]: I0228 04:40:44.127287 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-kc599" Feb 28 04:40:44 crc kubenswrapper[5014]: I0228 04:40:44.178940 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50cf3400-fb73-4038-b616-2d3559aaf784" path="/var/lib/kubelet/pods/50cf3400-fb73-4038-b616-2d3559aaf784/volumes" Feb 28 04:40:44 crc kubenswrapper[5014]: I0228 04:40:44.180175 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52079806-fc0c-4852-8150-0123d376c1b2" path="/var/lib/kubelet/pods/52079806-fc0c-4852-8150-0123d376c1b2/volumes" Feb 28 04:40:44 crc kubenswrapper[5014]: I0228 04:40:44.181308 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a00f74f-e858-42cc-b882-492afd45684d" path="/var/lib/kubelet/pods/8a00f74f-e858-42cc-b882-492afd45684d/volumes" Feb 28 04:40:44 crc kubenswrapper[5014]: I0228 04:40:44.183411 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b60b7614-e66f-4184-b1ff-10fb0ba1ed31" path="/var/lib/kubelet/pods/b60b7614-e66f-4184-b1ff-10fb0ba1ed31/volumes" Feb 28 04:40:44 crc kubenswrapper[5014]: I0228 04:40:44.184379 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bba9702f-9e04-46d4-9a98-92d5303383c4" path="/var/lib/kubelet/pods/bba9702f-9e04-46d4-9a98-92d5303383c4/volumes" Feb 28 04:40:44 crc kubenswrapper[5014]: I0228 04:40:44.562379 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-kc599"] Feb 28 04:40:44 crc kubenswrapper[5014]: W0228 04:40:44.568142 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1286af62_b972_4b45_a18b_f7e0085a1a69.slice/crio-fe87b0696a50dc721c383dd530d6f2d23139e00128033c5b2c55ae85b12a774c WatchSource:0}: Error finding container fe87b0696a50dc721c383dd530d6f2d23139e00128033c5b2c55ae85b12a774c: Status 404 returned error can't find the container with id 
fe87b0696a50dc721c383dd530d6f2d23139e00128033c5b2c55ae85b12a774c Feb 28 04:40:44 crc kubenswrapper[5014]: I0228 04:40:44.750271 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n92hm"] Feb 28 04:40:44 crc kubenswrapper[5014]: I0228 04:40:44.751445 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n92hm" Feb 28 04:40:44 crc kubenswrapper[5014]: I0228 04:40:44.753919 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 28 04:40:44 crc kubenswrapper[5014]: I0228 04:40:44.761202 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n92hm"] Feb 28 04:40:44 crc kubenswrapper[5014]: I0228 04:40:44.872092 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c78f8995-32df-4c90-9919-e5e6f53c16ed-utilities\") pod \"redhat-marketplace-n92hm\" (UID: \"c78f8995-32df-4c90-9919-e5e6f53c16ed\") " pod="openshift-marketplace/redhat-marketplace-n92hm" Feb 28 04:40:44 crc kubenswrapper[5014]: I0228 04:40:44.872549 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c78f8995-32df-4c90-9919-e5e6f53c16ed-catalog-content\") pod \"redhat-marketplace-n92hm\" (UID: \"c78f8995-32df-4c90-9919-e5e6f53c16ed\") " pod="openshift-marketplace/redhat-marketplace-n92hm" Feb 28 04:40:44 crc kubenswrapper[5014]: I0228 04:40:44.872594 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk7vh\" (UniqueName: \"kubernetes.io/projected/c78f8995-32df-4c90-9919-e5e6f53c16ed-kube-api-access-sk7vh\") pod \"redhat-marketplace-n92hm\" (UID: \"c78f8995-32df-4c90-9919-e5e6f53c16ed\") " 
pod="openshift-marketplace/redhat-marketplace-n92hm" Feb 28 04:40:44 crc kubenswrapper[5014]: I0228 04:40:44.974562 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c78f8995-32df-4c90-9919-e5e6f53c16ed-catalog-content\") pod \"redhat-marketplace-n92hm\" (UID: \"c78f8995-32df-4c90-9919-e5e6f53c16ed\") " pod="openshift-marketplace/redhat-marketplace-n92hm" Feb 28 04:40:44 crc kubenswrapper[5014]: I0228 04:40:44.974642 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk7vh\" (UniqueName: \"kubernetes.io/projected/c78f8995-32df-4c90-9919-e5e6f53c16ed-kube-api-access-sk7vh\") pod \"redhat-marketplace-n92hm\" (UID: \"c78f8995-32df-4c90-9919-e5e6f53c16ed\") " pod="openshift-marketplace/redhat-marketplace-n92hm" Feb 28 04:40:44 crc kubenswrapper[5014]: I0228 04:40:44.974759 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c78f8995-32df-4c90-9919-e5e6f53c16ed-utilities\") pod \"redhat-marketplace-n92hm\" (UID: \"c78f8995-32df-4c90-9919-e5e6f53c16ed\") " pod="openshift-marketplace/redhat-marketplace-n92hm" Feb 28 04:40:44 crc kubenswrapper[5014]: I0228 04:40:44.975280 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c78f8995-32df-4c90-9919-e5e6f53c16ed-utilities\") pod \"redhat-marketplace-n92hm\" (UID: \"c78f8995-32df-4c90-9919-e5e6f53c16ed\") " pod="openshift-marketplace/redhat-marketplace-n92hm" Feb 28 04:40:44 crc kubenswrapper[5014]: I0228 04:40:44.975539 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c78f8995-32df-4c90-9919-e5e6f53c16ed-catalog-content\") pod \"redhat-marketplace-n92hm\" (UID: \"c78f8995-32df-4c90-9919-e5e6f53c16ed\") " pod="openshift-marketplace/redhat-marketplace-n92hm" 
Feb 28 04:40:44 crc kubenswrapper[5014]: I0228 04:40:44.996704 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk7vh\" (UniqueName: \"kubernetes.io/projected/c78f8995-32df-4c90-9919-e5e6f53c16ed-kube-api-access-sk7vh\") pod \"redhat-marketplace-n92hm\" (UID: \"c78f8995-32df-4c90-9919-e5e6f53c16ed\") " pod="openshift-marketplace/redhat-marketplace-n92hm" Feb 28 04:40:45 crc kubenswrapper[5014]: I0228 04:40:45.096658 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n92hm" Feb 28 04:40:45 crc kubenswrapper[5014]: I0228 04:40:45.267453 5014 generic.go:334] "Generic (PLEG): container finished" podID="1286af62-b972-4b45-a18b-f7e0085a1a69" containerID="611069125f3f4cb6d2aad415664f946c887be18c821d398a1b9a3e5f2fee89f8" exitCode=0 Feb 28 04:40:45 crc kubenswrapper[5014]: I0228 04:40:45.267523 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kc599" event={"ID":"1286af62-b972-4b45-a18b-f7e0085a1a69","Type":"ContainerDied","Data":"611069125f3f4cb6d2aad415664f946c887be18c821d398a1b9a3e5f2fee89f8"} Feb 28 04:40:45 crc kubenswrapper[5014]: I0228 04:40:45.267574 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kc599" event={"ID":"1286af62-b972-4b45-a18b-f7e0085a1a69","Type":"ContainerStarted","Data":"fe87b0696a50dc721c383dd530d6f2d23139e00128033c5b2c55ae85b12a774c"} Feb 28 04:40:45 crc kubenswrapper[5014]: I0228 04:40:45.344779 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n92hm"] Feb 28 04:40:45 crc kubenswrapper[5014]: W0228 04:40:45.350991 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc78f8995_32df_4c90_9919_e5e6f53c16ed.slice/crio-db2137b8c1ca4cd9e5604916fbdc9835facb4b5f27726d9b62139f4afbb45c71 WatchSource:0}: Error 
finding container db2137b8c1ca4cd9e5604916fbdc9835facb4b5f27726d9b62139f4afbb45c71: Status 404 returned error can't find the container with id db2137b8c1ca4cd9e5604916fbdc9835facb4b5f27726d9b62139f4afbb45c71 Feb 28 04:40:45 crc kubenswrapper[5014]: I0228 04:40:45.706565 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:40:45 crc kubenswrapper[5014]: I0228 04:40:45.707136 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:40:46 crc kubenswrapper[5014]: I0228 04:40:46.157597 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9tfwt"] Feb 28 04:40:46 crc kubenswrapper[5014]: I0228 04:40:46.158603 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9tfwt" Feb 28 04:40:46 crc kubenswrapper[5014]: I0228 04:40:46.162530 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 28 04:40:46 crc kubenswrapper[5014]: I0228 04:40:46.167723 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9tfwt"] Feb 28 04:40:46 crc kubenswrapper[5014]: I0228 04:40:46.258428 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86fe7b38-7d96-499b-a693-397309da77bd-catalog-content\") pod \"redhat-operators-9tfwt\" (UID: \"86fe7b38-7d96-499b-a693-397309da77bd\") " pod="openshift-marketplace/redhat-operators-9tfwt" Feb 28 04:40:46 crc kubenswrapper[5014]: I0228 04:40:46.258881 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfskq\" (UniqueName: \"kubernetes.io/projected/86fe7b38-7d96-499b-a693-397309da77bd-kube-api-access-tfskq\") pod \"redhat-operators-9tfwt\" (UID: \"86fe7b38-7d96-499b-a693-397309da77bd\") " pod="openshift-marketplace/redhat-operators-9tfwt" Feb 28 04:40:46 crc kubenswrapper[5014]: I0228 04:40:46.259068 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86fe7b38-7d96-499b-a693-397309da77bd-utilities\") pod \"redhat-operators-9tfwt\" (UID: \"86fe7b38-7d96-499b-a693-397309da77bd\") " pod="openshift-marketplace/redhat-operators-9tfwt" Feb 28 04:40:46 crc kubenswrapper[5014]: I0228 04:40:46.274507 5014 generic.go:334] "Generic (PLEG): container finished" podID="c78f8995-32df-4c90-9919-e5e6f53c16ed" containerID="f40238554c1c96238b04cd75e998394b6af282bbb511093fd45a335cbe57ff85" exitCode=0 Feb 28 04:40:46 crc kubenswrapper[5014]: I0228 04:40:46.274552 5014 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n92hm" event={"ID":"c78f8995-32df-4c90-9919-e5e6f53c16ed","Type":"ContainerDied","Data":"f40238554c1c96238b04cd75e998394b6af282bbb511093fd45a335cbe57ff85"} Feb 28 04:40:46 crc kubenswrapper[5014]: I0228 04:40:46.274580 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n92hm" event={"ID":"c78f8995-32df-4c90-9919-e5e6f53c16ed","Type":"ContainerStarted","Data":"db2137b8c1ca4cd9e5604916fbdc9835facb4b5f27726d9b62139f4afbb45c71"} Feb 28 04:40:46 crc kubenswrapper[5014]: I0228 04:40:46.359693 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86fe7b38-7d96-499b-a693-397309da77bd-utilities\") pod \"redhat-operators-9tfwt\" (UID: \"86fe7b38-7d96-499b-a693-397309da77bd\") " pod="openshift-marketplace/redhat-operators-9tfwt" Feb 28 04:40:46 crc kubenswrapper[5014]: I0228 04:40:46.359780 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86fe7b38-7d96-499b-a693-397309da77bd-catalog-content\") pod \"redhat-operators-9tfwt\" (UID: \"86fe7b38-7d96-499b-a693-397309da77bd\") " pod="openshift-marketplace/redhat-operators-9tfwt" Feb 28 04:40:46 crc kubenswrapper[5014]: I0228 04:40:46.359840 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfskq\" (UniqueName: \"kubernetes.io/projected/86fe7b38-7d96-499b-a693-397309da77bd-kube-api-access-tfskq\") pod \"redhat-operators-9tfwt\" (UID: \"86fe7b38-7d96-499b-a693-397309da77bd\") " pod="openshift-marketplace/redhat-operators-9tfwt" Feb 28 04:40:46 crc kubenswrapper[5014]: I0228 04:40:46.360413 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86fe7b38-7d96-499b-a693-397309da77bd-utilities\") pod 
\"redhat-operators-9tfwt\" (UID: \"86fe7b38-7d96-499b-a693-397309da77bd\") " pod="openshift-marketplace/redhat-operators-9tfwt" Feb 28 04:40:46 crc kubenswrapper[5014]: I0228 04:40:46.360438 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86fe7b38-7d96-499b-a693-397309da77bd-catalog-content\") pod \"redhat-operators-9tfwt\" (UID: \"86fe7b38-7d96-499b-a693-397309da77bd\") " pod="openshift-marketplace/redhat-operators-9tfwt" Feb 28 04:40:46 crc kubenswrapper[5014]: I0228 04:40:46.386891 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfskq\" (UniqueName: \"kubernetes.io/projected/86fe7b38-7d96-499b-a693-397309da77bd-kube-api-access-tfskq\") pod \"redhat-operators-9tfwt\" (UID: \"86fe7b38-7d96-499b-a693-397309da77bd\") " pod="openshift-marketplace/redhat-operators-9tfwt" Feb 28 04:40:46 crc kubenswrapper[5014]: I0228 04:40:46.482561 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9tfwt" Feb 28 04:40:46 crc kubenswrapper[5014]: I0228 04:40:46.898510 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9tfwt"] Feb 28 04:40:46 crc kubenswrapper[5014]: W0228 04:40:46.902572 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86fe7b38_7d96_499b_a693_397309da77bd.slice/crio-3de7a9d5556540d7ecd3d97dc8c574b61bcacf2ae58d7845a1ffee14d56b1f67 WatchSource:0}: Error finding container 3de7a9d5556540d7ecd3d97dc8c574b61bcacf2ae58d7845a1ffee14d56b1f67: Status 404 returned error can't find the container with id 3de7a9d5556540d7ecd3d97dc8c574b61bcacf2ae58d7845a1ffee14d56b1f67 Feb 28 04:40:47 crc kubenswrapper[5014]: I0228 04:40:47.150693 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rmvfd"] Feb 28 04:40:47 crc kubenswrapper[5014]: I0228 04:40:47.151988 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rmvfd" Feb 28 04:40:47 crc kubenswrapper[5014]: I0228 04:40:47.153947 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 28 04:40:47 crc kubenswrapper[5014]: I0228 04:40:47.164392 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rmvfd"] Feb 28 04:40:47 crc kubenswrapper[5014]: I0228 04:40:47.275330 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e65a2cc1-a391-48ab-a843-e86f58cf278a-catalog-content\") pod \"community-operators-rmvfd\" (UID: \"e65a2cc1-a391-48ab-a843-e86f58cf278a\") " pod="openshift-marketplace/community-operators-rmvfd" Feb 28 04:40:47 crc kubenswrapper[5014]: I0228 04:40:47.275422 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e65a2cc1-a391-48ab-a843-e86f58cf278a-utilities\") pod \"community-operators-rmvfd\" (UID: \"e65a2cc1-a391-48ab-a843-e86f58cf278a\") " pod="openshift-marketplace/community-operators-rmvfd" Feb 28 04:40:47 crc kubenswrapper[5014]: I0228 04:40:47.275560 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhldb\" (UniqueName: \"kubernetes.io/projected/e65a2cc1-a391-48ab-a843-e86f58cf278a-kube-api-access-xhldb\") pod \"community-operators-rmvfd\" (UID: \"e65a2cc1-a391-48ab-a843-e86f58cf278a\") " pod="openshift-marketplace/community-operators-rmvfd" Feb 28 04:40:47 crc kubenswrapper[5014]: I0228 04:40:47.282152 5014 generic.go:334] "Generic (PLEG): container finished" podID="86fe7b38-7d96-499b-a693-397309da77bd" containerID="061824d503f802b6bc085319a90a9d5e58dc56ed6136c5f905c997249a692029" exitCode=0 Feb 28 04:40:47 crc kubenswrapper[5014]: I0228 
04:40:47.282206 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9tfwt" event={"ID":"86fe7b38-7d96-499b-a693-397309da77bd","Type":"ContainerDied","Data":"061824d503f802b6bc085319a90a9d5e58dc56ed6136c5f905c997249a692029"} Feb 28 04:40:47 crc kubenswrapper[5014]: I0228 04:40:47.282383 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9tfwt" event={"ID":"86fe7b38-7d96-499b-a693-397309da77bd","Type":"ContainerStarted","Data":"3de7a9d5556540d7ecd3d97dc8c574b61bcacf2ae58d7845a1ffee14d56b1f67"} Feb 28 04:40:47 crc kubenswrapper[5014]: I0228 04:40:47.287521 5014 generic.go:334] "Generic (PLEG): container finished" podID="c78f8995-32df-4c90-9919-e5e6f53c16ed" containerID="0549d027ec4cebf40fb7f16feef991d98ddbebc4ea746cd4d062aba1372c0b4c" exitCode=0 Feb 28 04:40:47 crc kubenswrapper[5014]: I0228 04:40:47.287568 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n92hm" event={"ID":"c78f8995-32df-4c90-9919-e5e6f53c16ed","Type":"ContainerDied","Data":"0549d027ec4cebf40fb7f16feef991d98ddbebc4ea746cd4d062aba1372c0b4c"} Feb 28 04:40:47 crc kubenswrapper[5014]: I0228 04:40:47.293206 5014 generic.go:334] "Generic (PLEG): container finished" podID="1286af62-b972-4b45-a18b-f7e0085a1a69" containerID="8d12a1b135add7cee13c914ef3e60e24cd992f5576e40e93aca63e2f4d9b76b6" exitCode=0 Feb 28 04:40:47 crc kubenswrapper[5014]: I0228 04:40:47.293277 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kc599" event={"ID":"1286af62-b972-4b45-a18b-f7e0085a1a69","Type":"ContainerDied","Data":"8d12a1b135add7cee13c914ef3e60e24cd992f5576e40e93aca63e2f4d9b76b6"} Feb 28 04:40:47 crc kubenswrapper[5014]: I0228 04:40:47.377187 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e65a2cc1-a391-48ab-a843-e86f58cf278a-catalog-content\") pod \"community-operators-rmvfd\" (UID: \"e65a2cc1-a391-48ab-a843-e86f58cf278a\") " pod="openshift-marketplace/community-operators-rmvfd" Feb 28 04:40:47 crc kubenswrapper[5014]: I0228 04:40:47.377258 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e65a2cc1-a391-48ab-a843-e86f58cf278a-utilities\") pod \"community-operators-rmvfd\" (UID: \"e65a2cc1-a391-48ab-a843-e86f58cf278a\") " pod="openshift-marketplace/community-operators-rmvfd" Feb 28 04:40:47 crc kubenswrapper[5014]: I0228 04:40:47.377934 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhldb\" (UniqueName: \"kubernetes.io/projected/e65a2cc1-a391-48ab-a843-e86f58cf278a-kube-api-access-xhldb\") pod \"community-operators-rmvfd\" (UID: \"e65a2cc1-a391-48ab-a843-e86f58cf278a\") " pod="openshift-marketplace/community-operators-rmvfd" Feb 28 04:40:47 crc kubenswrapper[5014]: I0228 04:40:47.378482 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e65a2cc1-a391-48ab-a843-e86f58cf278a-catalog-content\") pod \"community-operators-rmvfd\" (UID: \"e65a2cc1-a391-48ab-a843-e86f58cf278a\") " pod="openshift-marketplace/community-operators-rmvfd" Feb 28 04:40:47 crc kubenswrapper[5014]: I0228 04:40:47.378841 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e65a2cc1-a391-48ab-a843-e86f58cf278a-utilities\") pod \"community-operators-rmvfd\" (UID: \"e65a2cc1-a391-48ab-a843-e86f58cf278a\") " pod="openshift-marketplace/community-operators-rmvfd" Feb 28 04:40:47 crc kubenswrapper[5014]: I0228 04:40:47.405865 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhldb\" (UniqueName: 
\"kubernetes.io/projected/e65a2cc1-a391-48ab-a843-e86f58cf278a-kube-api-access-xhldb\") pod \"community-operators-rmvfd\" (UID: \"e65a2cc1-a391-48ab-a843-e86f58cf278a\") " pod="openshift-marketplace/community-operators-rmvfd" Feb 28 04:40:47 crc kubenswrapper[5014]: I0228 04:40:47.530186 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rmvfd" Feb 28 04:40:47 crc kubenswrapper[5014]: I0228 04:40:47.777047 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rmvfd"] Feb 28 04:40:47 crc kubenswrapper[5014]: W0228 04:40:47.793570 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode65a2cc1_a391_48ab_a843_e86f58cf278a.slice/crio-394a2c930dadb61e4f6849469d25f58596c53eccdb60ccfb75ca3bdfd48f62b5 WatchSource:0}: Error finding container 394a2c930dadb61e4f6849469d25f58596c53eccdb60ccfb75ca3bdfd48f62b5: Status 404 returned error can't find the container with id 394a2c930dadb61e4f6849469d25f58596c53eccdb60ccfb75ca3bdfd48f62b5 Feb 28 04:40:48 crc kubenswrapper[5014]: I0228 04:40:48.301119 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n92hm" event={"ID":"c78f8995-32df-4c90-9919-e5e6f53c16ed","Type":"ContainerStarted","Data":"250d8c9c7f449cb4157b726cb62a69901dd24256f1980244bf17b673798f8de5"} Feb 28 04:40:48 crc kubenswrapper[5014]: I0228 04:40:48.303209 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-kc599" event={"ID":"1286af62-b972-4b45-a18b-f7e0085a1a69","Type":"ContainerStarted","Data":"8836fed1d39fd6e94db9eafc6015c8b5fc5df1d912026609af5bdd68e3464ff6"} Feb 28 04:40:48 crc kubenswrapper[5014]: I0228 04:40:48.304560 5014 generic.go:334] "Generic (PLEG): container finished" podID="e65a2cc1-a391-48ab-a843-e86f58cf278a" 
containerID="3844abf09f84090d5a24c0d9d45216b068ab1b460394216f2d7deaa6857497e3" exitCode=0 Feb 28 04:40:48 crc kubenswrapper[5014]: I0228 04:40:48.304653 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmvfd" event={"ID":"e65a2cc1-a391-48ab-a843-e86f58cf278a","Type":"ContainerDied","Data":"3844abf09f84090d5a24c0d9d45216b068ab1b460394216f2d7deaa6857497e3"} Feb 28 04:40:48 crc kubenswrapper[5014]: I0228 04:40:48.304688 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmvfd" event={"ID":"e65a2cc1-a391-48ab-a843-e86f58cf278a","Type":"ContainerStarted","Data":"394a2c930dadb61e4f6849469d25f58596c53eccdb60ccfb75ca3bdfd48f62b5"} Feb 28 04:40:48 crc kubenswrapper[5014]: I0228 04:40:48.307275 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9tfwt" event={"ID":"86fe7b38-7d96-499b-a693-397309da77bd","Type":"ContainerStarted","Data":"0859da6064bdb7df0d0c121016017b912699cc65f03af64f88129a591d8fdb78"} Feb 28 04:40:48 crc kubenswrapper[5014]: I0228 04:40:48.327549 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n92hm" podStartSLOduration=2.924424797 podStartE2EDuration="4.327530845s" podCreationTimestamp="2026-02-28 04:40:44 +0000 UTC" firstStartedPulling="2026-02-28 04:40:46.27835137 +0000 UTC m=+434.948477280" lastFinishedPulling="2026-02-28 04:40:47.681457418 +0000 UTC m=+436.351583328" observedRunningTime="2026-02-28 04:40:48.326313701 +0000 UTC m=+436.996439611" watchObservedRunningTime="2026-02-28 04:40:48.327530845 +0000 UTC m=+436.997656755" Feb 28 04:40:49 crc kubenswrapper[5014]: I0228 04:40:49.316463 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmvfd" 
event={"ID":"e65a2cc1-a391-48ab-a843-e86f58cf278a","Type":"ContainerStarted","Data":"dc70ec25cb475c2b6ce06b9b21588e5a533af3c43a7e1a1ea8bcfc18f4cb3f1e"} Feb 28 04:40:49 crc kubenswrapper[5014]: I0228 04:40:49.319697 5014 generic.go:334] "Generic (PLEG): container finished" podID="86fe7b38-7d96-499b-a693-397309da77bd" containerID="0859da6064bdb7df0d0c121016017b912699cc65f03af64f88129a591d8fdb78" exitCode=0 Feb 28 04:40:49 crc kubenswrapper[5014]: I0228 04:40:49.319799 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9tfwt" event={"ID":"86fe7b38-7d96-499b-a693-397309da77bd","Type":"ContainerDied","Data":"0859da6064bdb7df0d0c121016017b912699cc65f03af64f88129a591d8fdb78"} Feb 28 04:40:49 crc kubenswrapper[5014]: I0228 04:40:49.338552 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-kc599" podStartSLOduration=3.921603556 podStartE2EDuration="6.338532185s" podCreationTimestamp="2026-02-28 04:40:43 +0000 UTC" firstStartedPulling="2026-02-28 04:40:45.269047569 +0000 UTC m=+433.939173489" lastFinishedPulling="2026-02-28 04:40:47.685976208 +0000 UTC m=+436.356102118" observedRunningTime="2026-02-28 04:40:48.389372745 +0000 UTC m=+437.059498655" watchObservedRunningTime="2026-02-28 04:40:49.338532185 +0000 UTC m=+438.008658105" Feb 28 04:40:50 crc kubenswrapper[5014]: I0228 04:40:50.340050 5014 generic.go:334] "Generic (PLEG): container finished" podID="e65a2cc1-a391-48ab-a843-e86f58cf278a" containerID="dc70ec25cb475c2b6ce06b9b21588e5a533af3c43a7e1a1ea8bcfc18f4cb3f1e" exitCode=0 Feb 28 04:40:50 crc kubenswrapper[5014]: I0228 04:40:50.340180 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmvfd" event={"ID":"e65a2cc1-a391-48ab-a843-e86f58cf278a","Type":"ContainerDied","Data":"dc70ec25cb475c2b6ce06b9b21588e5a533af3c43a7e1a1ea8bcfc18f4cb3f1e"} Feb 28 04:40:50 crc kubenswrapper[5014]: I0228 04:40:50.354429 
5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9tfwt" event={"ID":"86fe7b38-7d96-499b-a693-397309da77bd","Type":"ContainerStarted","Data":"4e674b43b14a718dcab80f6d62e4c8aad61d515c1ef00ac76f7ec428612e577b"} Feb 28 04:40:50 crc kubenswrapper[5014]: I0228 04:40:50.406578 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9tfwt" podStartSLOduration=1.781065458 podStartE2EDuration="4.406559505s" podCreationTimestamp="2026-02-28 04:40:46 +0000 UTC" firstStartedPulling="2026-02-28 04:40:47.283781039 +0000 UTC m=+435.953906949" lastFinishedPulling="2026-02-28 04:40:49.909275076 +0000 UTC m=+438.579400996" observedRunningTime="2026-02-28 04:40:50.405267119 +0000 UTC m=+439.075393109" watchObservedRunningTime="2026-02-28 04:40:50.406559505 +0000 UTC m=+439.076685425" Feb 28 04:40:51 crc kubenswrapper[5014]: I0228 04:40:51.360902 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rmvfd" event={"ID":"e65a2cc1-a391-48ab-a843-e86f58cf278a","Type":"ContainerStarted","Data":"b45127e67beb2eaa1035ece5bd707123da65eac523a89443f24c97fa48ad9570"} Feb 28 04:40:51 crc kubenswrapper[5014]: I0228 04:40:51.382157 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rmvfd" podStartSLOduration=1.900547432 podStartE2EDuration="4.382138751s" podCreationTimestamp="2026-02-28 04:40:47 +0000 UTC" firstStartedPulling="2026-02-28 04:40:48.30602258 +0000 UTC m=+436.976148500" lastFinishedPulling="2026-02-28 04:40:50.787613899 +0000 UTC m=+439.457739819" observedRunningTime="2026-02-28 04:40:51.378842166 +0000 UTC m=+440.048968096" watchObservedRunningTime="2026-02-28 04:40:51.382138751 +0000 UTC m=+440.052264671" Feb 28 04:40:54 crc kubenswrapper[5014]: I0228 04:40:54.128081 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-kc599" Feb 28 04:40:54 crc kubenswrapper[5014]: I0228 04:40:54.128155 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-kc599" Feb 28 04:40:54 crc kubenswrapper[5014]: I0228 04:40:54.192262 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-kc599" Feb 28 04:40:54 crc kubenswrapper[5014]: I0228 04:40:54.417675 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-kc599" Feb 28 04:40:55 crc kubenswrapper[5014]: I0228 04:40:55.069203 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-k459l" Feb 28 04:40:55 crc kubenswrapper[5014]: I0228 04:40:55.097403 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n92hm" Feb 28 04:40:55 crc kubenswrapper[5014]: I0228 04:40:55.097475 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n92hm" Feb 28 04:40:55 crc kubenswrapper[5014]: I0228 04:40:55.155303 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sm9r4"] Feb 28 04:40:55 crc kubenswrapper[5014]: I0228 04:40:55.197660 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n92hm" Feb 28 04:40:55 crc kubenswrapper[5014]: I0228 04:40:55.425641 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n92hm" Feb 28 04:40:56 crc kubenswrapper[5014]: I0228 04:40:56.482839 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9tfwt" Feb 28 04:40:56 crc kubenswrapper[5014]: I0228 
04:40:56.482890 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9tfwt" Feb 28 04:40:57 crc kubenswrapper[5014]: I0228 04:40:57.530589 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rmvfd" Feb 28 04:40:57 crc kubenswrapper[5014]: I0228 04:40:57.530661 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rmvfd" Feb 28 04:40:57 crc kubenswrapper[5014]: I0228 04:40:57.550143 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9tfwt" podUID="86fe7b38-7d96-499b-a693-397309da77bd" containerName="registry-server" probeResult="failure" output=< Feb 28 04:40:57 crc kubenswrapper[5014]: timeout: failed to connect service ":50051" within 1s Feb 28 04:40:57 crc kubenswrapper[5014]: > Feb 28 04:40:57 crc kubenswrapper[5014]: I0228 04:40:57.592312 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rmvfd" Feb 28 04:40:58 crc kubenswrapper[5014]: I0228 04:40:58.448325 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rmvfd" Feb 28 04:41:06 crc kubenswrapper[5014]: I0228 04:41:06.536993 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9tfwt" Feb 28 04:41:06 crc kubenswrapper[5014]: I0228 04:41:06.589254 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9tfwt" Feb 28 04:41:13 crc kubenswrapper[5014]: I0228 04:41:13.151756 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:41:13 crc kubenswrapper[5014]: I0228 04:41:13.152181 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:41:13 crc kubenswrapper[5014]: I0228 04:41:13.153205 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:41:13 crc kubenswrapper[5014]: I0228 04:41:13.158942 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:41:13 crc kubenswrapper[5014]: I0228 04:41:13.287362 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 28 04:41:14 crc kubenswrapper[5014]: I0228 04:41:14.266500 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:41:14 crc kubenswrapper[5014]: I0228 04:41:14.267177 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:41:14 crc kubenswrapper[5014]: I0228 04:41:14.273669 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:41:14 crc kubenswrapper[5014]: I0228 04:41:14.273920 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:41:14 crc kubenswrapper[5014]: I0228 04:41:14.487051 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 28 04:41:14 crc kubenswrapper[5014]: I0228 04:41:14.511544 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3cd6fe18da04e6498d39d895fbadd5fcc179ed0c5b2446171b048487d4e016e8"} Feb 28 04:41:14 crc kubenswrapper[5014]: I0228 04:41:14.511632 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"b5cff71f51769337aa8c89b13884329c69be140bbb0d2d05df9c7450098753a9"} Feb 28 04:41:14 crc kubenswrapper[5014]: I0228 04:41:14.583589 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:41:15 crc kubenswrapper[5014]: W0228 04:41:15.088652 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-f4c0ae073a5d041f324e7c2bde1d159e26c4c68bd942b161eee0a388d1a6a3c4 WatchSource:0}: Error finding container f4c0ae073a5d041f324e7c2bde1d159e26c4c68bd942b161eee0a388d1a6a3c4: Status 404 returned error can't find the container with id f4c0ae073a5d041f324e7c2bde1d159e26c4c68bd942b161eee0a388d1a6a3c4 Feb 28 04:41:15 crc kubenswrapper[5014]: I0228 04:41:15.519053 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"f4c0ae073a5d041f324e7c2bde1d159e26c4c68bd942b161eee0a388d1a6a3c4"} Feb 28 04:41:15 crc kubenswrapper[5014]: I0228 04:41:15.521348 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ea4f59ba94a9e7ab5256759230259a74b369f7b15d98bb455a139cd263b06a62"} Feb 28 04:41:15 crc kubenswrapper[5014]: I0228 04:41:15.706340 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:41:15 crc kubenswrapper[5014]: I0228 04:41:15.706416 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:41:16 crc kubenswrapper[5014]: I0228 04:41:16.528430 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"7f1373062b13bbdb71632a1fe4c2d28803348d7023c6cb099f2c800fb0cea719"} Feb 28 04:41:16 crc kubenswrapper[5014]: I0228 04:41:16.530682 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"be123e3330985ae6ddc51177ce1d1643451aba071e6aa5bcc96588fa2b719a85"} Feb 28 04:41:16 crc kubenswrapper[5014]: I0228 04:41:16.531262 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:41:20 crc kubenswrapper[5014]: I0228 04:41:20.201674 5014 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" podUID="8bf1ab3c-8003-4a48-b248-30282df03e95" containerName="registry" containerID="cri-o://086fbbe28e04b9e93dd7ac8173bbf7a394ce52c08b15c706dd7f807121ae923e" gracePeriod=30 Feb 28 04:41:22 crc kubenswrapper[5014]: I0228 04:41:22.454200 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:41:22 crc kubenswrapper[5014]: I0228 04:41:22.573976 5014 generic.go:334] "Generic (PLEG): container finished" podID="8bf1ab3c-8003-4a48-b248-30282df03e95" containerID="086fbbe28e04b9e93dd7ac8173bbf7a394ce52c08b15c706dd7f807121ae923e" exitCode=0 Feb 28 04:41:22 crc kubenswrapper[5014]: I0228 04:41:22.574034 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" event={"ID":"8bf1ab3c-8003-4a48-b248-30282df03e95","Type":"ContainerDied","Data":"086fbbe28e04b9e93dd7ac8173bbf7a394ce52c08b15c706dd7f807121ae923e"} Feb 28 04:41:22 crc kubenswrapper[5014]: I0228 04:41:22.574064 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" event={"ID":"8bf1ab3c-8003-4a48-b248-30282df03e95","Type":"ContainerDied","Data":"b5e662359242bbec23461b918802ed6766e5f79840cd2b10863a0b9a225910dc"} Feb 28 04:41:22 crc kubenswrapper[5014]: I0228 04:41:22.574089 5014 scope.go:117] "RemoveContainer" containerID="086fbbe28e04b9e93dd7ac8173bbf7a394ce52c08b15c706dd7f807121ae923e" Feb 28 04:41:22 crc kubenswrapper[5014]: I0228 04:41:22.574037 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-sm9r4" Feb 28 04:41:22 crc kubenswrapper[5014]: I0228 04:41:22.600667 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8bf1ab3c-8003-4a48-b248-30282df03e95-registry-tls\") pod \"8bf1ab3c-8003-4a48-b248-30282df03e95\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " Feb 28 04:41:22 crc kubenswrapper[5014]: I0228 04:41:22.600713 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8bf1ab3c-8003-4a48-b248-30282df03e95-bound-sa-token\") pod \"8bf1ab3c-8003-4a48-b248-30282df03e95\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " Feb 28 04:41:22 crc kubenswrapper[5014]: I0228 04:41:22.600737 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8bf1ab3c-8003-4a48-b248-30282df03e95-trusted-ca\") pod \"8bf1ab3c-8003-4a48-b248-30282df03e95\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " Feb 28 04:41:22 crc kubenswrapper[5014]: I0228 04:41:22.600785 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfdcp\" (UniqueName: \"kubernetes.io/projected/8bf1ab3c-8003-4a48-b248-30282df03e95-kube-api-access-nfdcp\") pod \"8bf1ab3c-8003-4a48-b248-30282df03e95\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " Feb 28 04:41:22 crc kubenswrapper[5014]: I0228 04:41:22.600839 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8bf1ab3c-8003-4a48-b248-30282df03e95-ca-trust-extracted\") pod \"8bf1ab3c-8003-4a48-b248-30282df03e95\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " Feb 28 04:41:22 crc kubenswrapper[5014]: I0228 04:41:22.601565 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8bf1ab3c-8003-4a48-b248-30282df03e95\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " Feb 28 04:41:22 crc kubenswrapper[5014]: I0228 04:41:22.601618 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8bf1ab3c-8003-4a48-b248-30282df03e95-installation-pull-secrets\") pod \"8bf1ab3c-8003-4a48-b248-30282df03e95\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " Feb 28 04:41:22 crc kubenswrapper[5014]: I0228 04:41:22.601642 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8bf1ab3c-8003-4a48-b248-30282df03e95-registry-certificates\") pod \"8bf1ab3c-8003-4a48-b248-30282df03e95\" (UID: \"8bf1ab3c-8003-4a48-b248-30282df03e95\") " Feb 28 04:41:22 crc kubenswrapper[5014]: I0228 04:41:22.602450 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bf1ab3c-8003-4a48-b248-30282df03e95-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8bf1ab3c-8003-4a48-b248-30282df03e95" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:41:22 crc kubenswrapper[5014]: I0228 04:41:22.606998 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bf1ab3c-8003-4a48-b248-30282df03e95-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8bf1ab3c-8003-4a48-b248-30282df03e95" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:41:22 crc kubenswrapper[5014]: I0228 04:41:22.607356 5014 scope.go:117] "RemoveContainer" containerID="086fbbe28e04b9e93dd7ac8173bbf7a394ce52c08b15c706dd7f807121ae923e" Feb 28 04:41:22 crc kubenswrapper[5014]: I0228 04:41:22.609858 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bf1ab3c-8003-4a48-b248-30282df03e95-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8bf1ab3c-8003-4a48-b248-30282df03e95" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:41:22 crc kubenswrapper[5014]: I0228 04:41:22.609913 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bf1ab3c-8003-4a48-b248-30282df03e95-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8bf1ab3c-8003-4a48-b248-30282df03e95" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:41:22 crc kubenswrapper[5014]: I0228 04:41:22.617764 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8bf1ab3c-8003-4a48-b248-30282df03e95-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8bf1ab3c-8003-4a48-b248-30282df03e95" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:41:22 crc kubenswrapper[5014]: E0228 04:41:22.619150 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"086fbbe28e04b9e93dd7ac8173bbf7a394ce52c08b15c706dd7f807121ae923e\": container with ID starting with 086fbbe28e04b9e93dd7ac8173bbf7a394ce52c08b15c706dd7f807121ae923e not found: ID does not exist" containerID="086fbbe28e04b9e93dd7ac8173bbf7a394ce52c08b15c706dd7f807121ae923e" Feb 28 04:41:22 crc kubenswrapper[5014]: I0228 04:41:22.619187 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"086fbbe28e04b9e93dd7ac8173bbf7a394ce52c08b15c706dd7f807121ae923e"} err="failed to get container status \"086fbbe28e04b9e93dd7ac8173bbf7a394ce52c08b15c706dd7f807121ae923e\": rpc error: code = NotFound desc = could not find container \"086fbbe28e04b9e93dd7ac8173bbf7a394ce52c08b15c706dd7f807121ae923e\": container with ID starting with 086fbbe28e04b9e93dd7ac8173bbf7a394ce52c08b15c706dd7f807121ae923e not found: ID does not exist" Feb 28 04:41:22 crc kubenswrapper[5014]: I0228 04:41:22.619330 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bf1ab3c-8003-4a48-b248-30282df03e95-kube-api-access-nfdcp" (OuterVolumeSpecName: "kube-api-access-nfdcp") pod "8bf1ab3c-8003-4a48-b248-30282df03e95" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95"). InnerVolumeSpecName "kube-api-access-nfdcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:41:22 crc kubenswrapper[5014]: I0228 04:41:22.619914 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bf1ab3c-8003-4a48-b248-30282df03e95-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8bf1ab3c-8003-4a48-b248-30282df03e95" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:41:22 crc kubenswrapper[5014]: I0228 04:41:22.628026 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "8bf1ab3c-8003-4a48-b248-30282df03e95" (UID: "8bf1ab3c-8003-4a48-b248-30282df03e95"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 28 04:41:22 crc kubenswrapper[5014]: I0228 04:41:22.703076 5014 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8bf1ab3c-8003-4a48-b248-30282df03e95-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 28 04:41:22 crc kubenswrapper[5014]: I0228 04:41:22.703429 5014 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8bf1ab3c-8003-4a48-b248-30282df03e95-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 28 04:41:22 crc kubenswrapper[5014]: I0228 04:41:22.703446 5014 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8bf1ab3c-8003-4a48-b248-30282df03e95-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 28 04:41:22 crc kubenswrapper[5014]: I0228 04:41:22.703459 5014 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8bf1ab3c-8003-4a48-b248-30282df03e95-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 28 04:41:22 crc kubenswrapper[5014]: I0228 04:41:22.703471 5014 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8bf1ab3c-8003-4a48-b248-30282df03e95-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 28 04:41:22 crc kubenswrapper[5014]: I0228 04:41:22.703483 5014 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-nfdcp\" (UniqueName: \"kubernetes.io/projected/8bf1ab3c-8003-4a48-b248-30282df03e95-kube-api-access-nfdcp\") on node \"crc\" DevicePath \"\"" Feb 28 04:41:22 crc kubenswrapper[5014]: I0228 04:41:22.703494 5014 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8bf1ab3c-8003-4a48-b248-30282df03e95-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 28 04:41:22 crc kubenswrapper[5014]: I0228 04:41:22.901082 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sm9r4"] Feb 28 04:41:22 crc kubenswrapper[5014]: I0228 04:41:22.906081 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-sm9r4"] Feb 28 04:41:24 crc kubenswrapper[5014]: I0228 04:41:24.181251 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bf1ab3c-8003-4a48-b248-30282df03e95" path="/var/lib/kubelet/pods/8bf1ab3c-8003-4a48-b248-30282df03e95/volumes" Feb 28 04:41:45 crc kubenswrapper[5014]: I0228 04:41:45.707178 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:41:45 crc kubenswrapper[5014]: I0228 04:41:45.707794 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:41:45 crc kubenswrapper[5014]: I0228 04:41:45.707873 5014 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-cct62" Feb 28 04:41:45 crc kubenswrapper[5014]: I0228 04:41:45.708501 5014 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3c2b8713a83a979e30942a4af450ca8224f253d52fbaf4696ad56965a2752095"} pod="openshift-machine-config-operator/machine-config-daemon-cct62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 04:41:45 crc kubenswrapper[5014]: I0228 04:41:45.708563 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" containerID="cri-o://3c2b8713a83a979e30942a4af450ca8224f253d52fbaf4696ad56965a2752095" gracePeriod=600 Feb 28 04:41:46 crc kubenswrapper[5014]: I0228 04:41:46.707239 5014 generic.go:334] "Generic (PLEG): container finished" podID="6aad0009-d904-48f8-8e30-82205907ece1" containerID="3c2b8713a83a979e30942a4af450ca8224f253d52fbaf4696ad56965a2752095" exitCode=0 Feb 28 04:41:46 crc kubenswrapper[5014]: I0228 04:41:46.707367 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" event={"ID":"6aad0009-d904-48f8-8e30-82205907ece1","Type":"ContainerDied","Data":"3c2b8713a83a979e30942a4af450ca8224f253d52fbaf4696ad56965a2752095"} Feb 28 04:41:46 crc kubenswrapper[5014]: I0228 04:41:46.707959 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" event={"ID":"6aad0009-d904-48f8-8e30-82205907ece1","Type":"ContainerStarted","Data":"76173414ea4b12400d46550fd1e95f3e073ea3c531dbe0f494a8f9363fcd0372"} Feb 28 04:41:46 crc kubenswrapper[5014]: I0228 04:41:46.707993 5014 scope.go:117] "RemoveContainer" 
containerID="40cbccd31c912a9ab6a9e8637016dc40a6dfd40522302f9192f50bd3b860a550" Feb 28 04:41:54 crc kubenswrapper[5014]: I0228 04:41:54.588867 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 28 04:42:00 crc kubenswrapper[5014]: I0228 04:42:00.134485 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537562-gm7z8"] Feb 28 04:42:00 crc kubenswrapper[5014]: E0228 04:42:00.136030 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf1ab3c-8003-4a48-b248-30282df03e95" containerName="registry" Feb 28 04:42:00 crc kubenswrapper[5014]: I0228 04:42:00.136111 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf1ab3c-8003-4a48-b248-30282df03e95" containerName="registry" Feb 28 04:42:00 crc kubenswrapper[5014]: I0228 04:42:00.136274 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bf1ab3c-8003-4a48-b248-30282df03e95" containerName="registry" Feb 28 04:42:00 crc kubenswrapper[5014]: I0228 04:42:00.136699 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537562-gm7z8" Feb 28 04:42:00 crc kubenswrapper[5014]: I0228 04:42:00.140576 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 04:42:00 crc kubenswrapper[5014]: I0228 04:42:00.140904 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-r6pdk" Feb 28 04:42:00 crc kubenswrapper[5014]: I0228 04:42:00.141954 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 04:42:00 crc kubenswrapper[5014]: I0228 04:42:00.146790 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537562-gm7z8"] Feb 28 04:42:00 crc kubenswrapper[5014]: I0228 04:42:00.260642 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzjfp\" (UniqueName: \"kubernetes.io/projected/e29226d0-8e4d-4cd1-9353-d7b85b709a7c-kube-api-access-rzjfp\") pod \"auto-csr-approver-29537562-gm7z8\" (UID: \"e29226d0-8e4d-4cd1-9353-d7b85b709a7c\") " pod="openshift-infra/auto-csr-approver-29537562-gm7z8" Feb 28 04:42:00 crc kubenswrapper[5014]: I0228 04:42:00.363475 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzjfp\" (UniqueName: \"kubernetes.io/projected/e29226d0-8e4d-4cd1-9353-d7b85b709a7c-kube-api-access-rzjfp\") pod \"auto-csr-approver-29537562-gm7z8\" (UID: \"e29226d0-8e4d-4cd1-9353-d7b85b709a7c\") " pod="openshift-infra/auto-csr-approver-29537562-gm7z8" Feb 28 04:42:00 crc kubenswrapper[5014]: I0228 04:42:00.405566 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzjfp\" (UniqueName: \"kubernetes.io/projected/e29226d0-8e4d-4cd1-9353-d7b85b709a7c-kube-api-access-rzjfp\") pod \"auto-csr-approver-29537562-gm7z8\" (UID: \"e29226d0-8e4d-4cd1-9353-d7b85b709a7c\") " 
pod="openshift-infra/auto-csr-approver-29537562-gm7z8" Feb 28 04:42:00 crc kubenswrapper[5014]: I0228 04:42:00.458222 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537562-gm7z8" Feb 28 04:42:00 crc kubenswrapper[5014]: I0228 04:42:00.698995 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537562-gm7z8"] Feb 28 04:42:00 crc kubenswrapper[5014]: I0228 04:42:00.788885 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537562-gm7z8" event={"ID":"e29226d0-8e4d-4cd1-9353-d7b85b709a7c","Type":"ContainerStarted","Data":"48f6e9a9c0c0d192f7236afd0d37f9ea7556de30da6611c6047d974556ab231d"} Feb 28 04:42:02 crc kubenswrapper[5014]: I0228 04:42:02.805424 5014 generic.go:334] "Generic (PLEG): container finished" podID="e29226d0-8e4d-4cd1-9353-d7b85b709a7c" containerID="78d3d44955358c2e893ac7c821f6f56971aa4fe61590d43765150483e9e63604" exitCode=0 Feb 28 04:42:02 crc kubenswrapper[5014]: I0228 04:42:02.805592 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537562-gm7z8" event={"ID":"e29226d0-8e4d-4cd1-9353-d7b85b709a7c","Type":"ContainerDied","Data":"78d3d44955358c2e893ac7c821f6f56971aa4fe61590d43765150483e9e63604"} Feb 28 04:42:04 crc kubenswrapper[5014]: I0228 04:42:04.132136 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537562-gm7z8" Feb 28 04:42:04 crc kubenswrapper[5014]: I0228 04:42:04.315780 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzjfp\" (UniqueName: \"kubernetes.io/projected/e29226d0-8e4d-4cd1-9353-d7b85b709a7c-kube-api-access-rzjfp\") pod \"e29226d0-8e4d-4cd1-9353-d7b85b709a7c\" (UID: \"e29226d0-8e4d-4cd1-9353-d7b85b709a7c\") " Feb 28 04:42:04 crc kubenswrapper[5014]: I0228 04:42:04.323353 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e29226d0-8e4d-4cd1-9353-d7b85b709a7c-kube-api-access-rzjfp" (OuterVolumeSpecName: "kube-api-access-rzjfp") pod "e29226d0-8e4d-4cd1-9353-d7b85b709a7c" (UID: "e29226d0-8e4d-4cd1-9353-d7b85b709a7c"). InnerVolumeSpecName "kube-api-access-rzjfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:42:04 crc kubenswrapper[5014]: I0228 04:42:04.417438 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzjfp\" (UniqueName: \"kubernetes.io/projected/e29226d0-8e4d-4cd1-9353-d7b85b709a7c-kube-api-access-rzjfp\") on node \"crc\" DevicePath \"\"" Feb 28 04:42:04 crc kubenswrapper[5014]: I0228 04:42:04.821344 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537562-gm7z8" event={"ID":"e29226d0-8e4d-4cd1-9353-d7b85b709a7c","Type":"ContainerDied","Data":"48f6e9a9c0c0d192f7236afd0d37f9ea7556de30da6611c6047d974556ab231d"} Feb 28 04:42:04 crc kubenswrapper[5014]: I0228 04:42:04.821393 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48f6e9a9c0c0d192f7236afd0d37f9ea7556de30da6611c6047d974556ab231d" Feb 28 04:42:04 crc kubenswrapper[5014]: I0228 04:42:04.821444 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537562-gm7z8" Feb 28 04:42:05 crc kubenswrapper[5014]: I0228 04:42:05.192208 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537556-wwqxk"] Feb 28 04:42:05 crc kubenswrapper[5014]: I0228 04:42:05.197385 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537556-wwqxk"] Feb 28 04:42:06 crc kubenswrapper[5014]: I0228 04:42:06.178787 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d84dec61-f4ef-4e0b-adb1-66694017a156" path="/var/lib/kubelet/pods/d84dec61-f4ef-4e0b-adb1-66694017a156/volumes" Feb 28 04:43:45 crc kubenswrapper[5014]: I0228 04:43:45.706427 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:43:45 crc kubenswrapper[5014]: I0228 04:43:45.707110 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:44:00 crc kubenswrapper[5014]: I0228 04:44:00.140475 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537564-j9zpx"] Feb 28 04:44:00 crc kubenswrapper[5014]: E0228 04:44:00.141415 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e29226d0-8e4d-4cd1-9353-d7b85b709a7c" containerName="oc" Feb 28 04:44:00 crc kubenswrapper[5014]: I0228 04:44:00.141444 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="e29226d0-8e4d-4cd1-9353-d7b85b709a7c" containerName="oc" Feb 28 04:44:00 crc 
kubenswrapper[5014]: I0228 04:44:00.141623 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="e29226d0-8e4d-4cd1-9353-d7b85b709a7c" containerName="oc" Feb 28 04:44:00 crc kubenswrapper[5014]: I0228 04:44:00.142372 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537564-j9zpx" Feb 28 04:44:00 crc kubenswrapper[5014]: I0228 04:44:00.145086 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 04:44:00 crc kubenswrapper[5014]: I0228 04:44:00.146162 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-r6pdk" Feb 28 04:44:00 crc kubenswrapper[5014]: I0228 04:44:00.146786 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 04:44:00 crc kubenswrapper[5014]: I0228 04:44:00.147081 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537564-j9zpx"] Feb 28 04:44:00 crc kubenswrapper[5014]: I0228 04:44:00.267545 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fs8cc\" (UniqueName: \"kubernetes.io/projected/1dd4f84a-9c92-49e2-8887-603c7560f417-kube-api-access-fs8cc\") pod \"auto-csr-approver-29537564-j9zpx\" (UID: \"1dd4f84a-9c92-49e2-8887-603c7560f417\") " pod="openshift-infra/auto-csr-approver-29537564-j9zpx" Feb 28 04:44:00 crc kubenswrapper[5014]: I0228 04:44:00.368700 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fs8cc\" (UniqueName: \"kubernetes.io/projected/1dd4f84a-9c92-49e2-8887-603c7560f417-kube-api-access-fs8cc\") pod \"auto-csr-approver-29537564-j9zpx\" (UID: \"1dd4f84a-9c92-49e2-8887-603c7560f417\") " pod="openshift-infra/auto-csr-approver-29537564-j9zpx" Feb 28 04:44:00 crc kubenswrapper[5014]: I0228 04:44:00.399599 5014 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fs8cc\" (UniqueName: \"kubernetes.io/projected/1dd4f84a-9c92-49e2-8887-603c7560f417-kube-api-access-fs8cc\") pod \"auto-csr-approver-29537564-j9zpx\" (UID: \"1dd4f84a-9c92-49e2-8887-603c7560f417\") " pod="openshift-infra/auto-csr-approver-29537564-j9zpx" Feb 28 04:44:00 crc kubenswrapper[5014]: I0228 04:44:00.472718 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537564-j9zpx" Feb 28 04:44:00 crc kubenswrapper[5014]: I0228 04:44:00.707520 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537564-j9zpx"] Feb 28 04:44:00 crc kubenswrapper[5014]: I0228 04:44:00.716482 5014 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 04:44:01 crc kubenswrapper[5014]: I0228 04:44:01.515979 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537564-j9zpx" event={"ID":"1dd4f84a-9c92-49e2-8887-603c7560f417","Type":"ContainerStarted","Data":"7889879a28297c8e9f381e4b5675dbf698fa285c31119f9360fcb457ec537a86"} Feb 28 04:44:02 crc kubenswrapper[5014]: I0228 04:44:02.522671 5014 generic.go:334] "Generic (PLEG): container finished" podID="1dd4f84a-9c92-49e2-8887-603c7560f417" containerID="ddaf4450281323c2e85864564e21a30cc53471b4aec6c913c6d11cbc3f8658d9" exitCode=0 Feb 28 04:44:02 crc kubenswrapper[5014]: I0228 04:44:02.522716 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537564-j9zpx" event={"ID":"1dd4f84a-9c92-49e2-8887-603c7560f417","Type":"ContainerDied","Data":"ddaf4450281323c2e85864564e21a30cc53471b4aec6c913c6d11cbc3f8658d9"} Feb 28 04:44:03 crc kubenswrapper[5014]: I0228 04:44:03.807925 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537564-j9zpx" Feb 28 04:44:03 crc kubenswrapper[5014]: I0228 04:44:03.817995 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fs8cc\" (UniqueName: \"kubernetes.io/projected/1dd4f84a-9c92-49e2-8887-603c7560f417-kube-api-access-fs8cc\") pod \"1dd4f84a-9c92-49e2-8887-603c7560f417\" (UID: \"1dd4f84a-9c92-49e2-8887-603c7560f417\") " Feb 28 04:44:03 crc kubenswrapper[5014]: I0228 04:44:03.828197 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dd4f84a-9c92-49e2-8887-603c7560f417-kube-api-access-fs8cc" (OuterVolumeSpecName: "kube-api-access-fs8cc") pod "1dd4f84a-9c92-49e2-8887-603c7560f417" (UID: "1dd4f84a-9c92-49e2-8887-603c7560f417"). InnerVolumeSpecName "kube-api-access-fs8cc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:44:03 crc kubenswrapper[5014]: I0228 04:44:03.919401 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fs8cc\" (UniqueName: \"kubernetes.io/projected/1dd4f84a-9c92-49e2-8887-603c7560f417-kube-api-access-fs8cc\") on node \"crc\" DevicePath \"\"" Feb 28 04:44:04 crc kubenswrapper[5014]: I0228 04:44:04.535779 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537564-j9zpx" event={"ID":"1dd4f84a-9c92-49e2-8887-603c7560f417","Type":"ContainerDied","Data":"7889879a28297c8e9f381e4b5675dbf698fa285c31119f9360fcb457ec537a86"} Feb 28 04:44:04 crc kubenswrapper[5014]: I0228 04:44:04.535859 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7889879a28297c8e9f381e4b5675dbf698fa285c31119f9360fcb457ec537a86" Feb 28 04:44:04 crc kubenswrapper[5014]: I0228 04:44:04.535919 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537564-j9zpx" Feb 28 04:44:04 crc kubenswrapper[5014]: I0228 04:44:04.867270 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537558-4hwb5"] Feb 28 04:44:04 crc kubenswrapper[5014]: I0228 04:44:04.871934 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537558-4hwb5"] Feb 28 04:44:06 crc kubenswrapper[5014]: I0228 04:44:06.181611 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1943af29-93f0-470e-85e8-4d53409329ae" path="/var/lib/kubelet/pods/1943af29-93f0-470e-85e8-4d53409329ae/volumes" Feb 28 04:44:15 crc kubenswrapper[5014]: I0228 04:44:15.707029 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:44:15 crc kubenswrapper[5014]: I0228 04:44:15.707585 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:44:32 crc kubenswrapper[5014]: I0228 04:44:32.567503 5014 scope.go:117] "RemoveContainer" containerID="b45257578421382e8bcd79d70bbd064942c27c04cece4d5bcd45a77fe67a4811" Feb 28 04:44:32 crc kubenswrapper[5014]: I0228 04:44:32.617470 5014 scope.go:117] "RemoveContainer" containerID="c9871dfb2c0a80b9a516f34a24f0ee67574f66f811ec1a3cc30dd3d8b7578a01" Feb 28 04:44:45 crc kubenswrapper[5014]: I0228 04:44:45.706306 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:44:45 crc kubenswrapper[5014]: I0228 04:44:45.706658 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:44:45 crc kubenswrapper[5014]: I0228 04:44:45.706732 5014 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cct62" Feb 28 04:44:45 crc kubenswrapper[5014]: I0228 04:44:45.707604 5014 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"76173414ea4b12400d46550fd1e95f3e073ea3c531dbe0f494a8f9363fcd0372"} pod="openshift-machine-config-operator/machine-config-daemon-cct62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 04:44:45 crc kubenswrapper[5014]: I0228 04:44:45.707720 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" containerID="cri-o://76173414ea4b12400d46550fd1e95f3e073ea3c531dbe0f494a8f9363fcd0372" gracePeriod=600 Feb 28 04:44:46 crc kubenswrapper[5014]: I0228 04:44:46.816467 5014 generic.go:334] "Generic (PLEG): container finished" podID="6aad0009-d904-48f8-8e30-82205907ece1" containerID="76173414ea4b12400d46550fd1e95f3e073ea3c531dbe0f494a8f9363fcd0372" exitCode=0 Feb 28 04:44:46 crc kubenswrapper[5014]: I0228 04:44:46.816577 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-cct62" event={"ID":"6aad0009-d904-48f8-8e30-82205907ece1","Type":"ContainerDied","Data":"76173414ea4b12400d46550fd1e95f3e073ea3c531dbe0f494a8f9363fcd0372"} Feb 28 04:44:46 crc kubenswrapper[5014]: I0228 04:44:46.817088 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" event={"ID":"6aad0009-d904-48f8-8e30-82205907ece1","Type":"ContainerStarted","Data":"3c623acb0fdab16e3036395527958cd8d0812619f2c3f18a285c60873b1031aa"} Feb 28 04:44:46 crc kubenswrapper[5014]: I0228 04:44:46.817116 5014 scope.go:117] "RemoveContainer" containerID="3c2b8713a83a979e30942a4af450ca8224f253d52fbaf4696ad56965a2752095" Feb 28 04:45:00 crc kubenswrapper[5014]: I0228 04:45:00.145968 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537565-8xslg"] Feb 28 04:45:00 crc kubenswrapper[5014]: E0228 04:45:00.146850 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1dd4f84a-9c92-49e2-8887-603c7560f417" containerName="oc" Feb 28 04:45:00 crc kubenswrapper[5014]: I0228 04:45:00.146866 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="1dd4f84a-9c92-49e2-8887-603c7560f417" containerName="oc" Feb 28 04:45:00 crc kubenswrapper[5014]: I0228 04:45:00.146987 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="1dd4f84a-9c92-49e2-8887-603c7560f417" containerName="oc" Feb 28 04:45:00 crc kubenswrapper[5014]: I0228 04:45:00.147429 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537565-8xslg" Feb 28 04:45:00 crc kubenswrapper[5014]: I0228 04:45:00.150343 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 28 04:45:00 crc kubenswrapper[5014]: I0228 04:45:00.150686 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 28 04:45:00 crc kubenswrapper[5014]: I0228 04:45:00.155011 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537565-8xslg"] Feb 28 04:45:00 crc kubenswrapper[5014]: I0228 04:45:00.278627 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/818c228c-0c91-4cb1-b010-40746252c8ee-config-volume\") pod \"collect-profiles-29537565-8xslg\" (UID: \"818c228c-0c91-4cb1-b010-40746252c8ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537565-8xslg" Feb 28 04:45:00 crc kubenswrapper[5014]: I0228 04:45:00.278724 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/818c228c-0c91-4cb1-b010-40746252c8ee-secret-volume\") pod \"collect-profiles-29537565-8xslg\" (UID: \"818c228c-0c91-4cb1-b010-40746252c8ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537565-8xslg" Feb 28 04:45:00 crc kubenswrapper[5014]: I0228 04:45:00.278769 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm2sj\" (UniqueName: \"kubernetes.io/projected/818c228c-0c91-4cb1-b010-40746252c8ee-kube-api-access-bm2sj\") pod \"collect-profiles-29537565-8xslg\" (UID: \"818c228c-0c91-4cb1-b010-40746252c8ee\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29537565-8xslg" Feb 28 04:45:00 crc kubenswrapper[5014]: I0228 04:45:00.381036 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/818c228c-0c91-4cb1-b010-40746252c8ee-config-volume\") pod \"collect-profiles-29537565-8xslg\" (UID: \"818c228c-0c91-4cb1-b010-40746252c8ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537565-8xslg" Feb 28 04:45:00 crc kubenswrapper[5014]: I0228 04:45:00.381134 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/818c228c-0c91-4cb1-b010-40746252c8ee-secret-volume\") pod \"collect-profiles-29537565-8xslg\" (UID: \"818c228c-0c91-4cb1-b010-40746252c8ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537565-8xslg" Feb 28 04:45:00 crc kubenswrapper[5014]: I0228 04:45:00.381170 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm2sj\" (UniqueName: \"kubernetes.io/projected/818c228c-0c91-4cb1-b010-40746252c8ee-kube-api-access-bm2sj\") pod \"collect-profiles-29537565-8xslg\" (UID: \"818c228c-0c91-4cb1-b010-40746252c8ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537565-8xslg" Feb 28 04:45:00 crc kubenswrapper[5014]: I0228 04:45:00.383093 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/818c228c-0c91-4cb1-b010-40746252c8ee-config-volume\") pod \"collect-profiles-29537565-8xslg\" (UID: \"818c228c-0c91-4cb1-b010-40746252c8ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537565-8xslg" Feb 28 04:45:00 crc kubenswrapper[5014]: I0228 04:45:00.395326 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/818c228c-0c91-4cb1-b010-40746252c8ee-secret-volume\") pod \"collect-profiles-29537565-8xslg\" (UID: \"818c228c-0c91-4cb1-b010-40746252c8ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537565-8xslg" Feb 28 04:45:00 crc kubenswrapper[5014]: I0228 04:45:00.401384 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm2sj\" (UniqueName: \"kubernetes.io/projected/818c228c-0c91-4cb1-b010-40746252c8ee-kube-api-access-bm2sj\") pod \"collect-profiles-29537565-8xslg\" (UID: \"818c228c-0c91-4cb1-b010-40746252c8ee\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537565-8xslg" Feb 28 04:45:00 crc kubenswrapper[5014]: I0228 04:45:00.504238 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537565-8xslg" Feb 28 04:45:00 crc kubenswrapper[5014]: I0228 04:45:00.745666 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537565-8xslg"] Feb 28 04:45:00 crc kubenswrapper[5014]: W0228 04:45:00.757592 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod818c228c_0c91_4cb1_b010_40746252c8ee.slice/crio-ff03ae74764eaf503f756bcf07f630709fe0863f1d5e999e98e3f902ed83e155 WatchSource:0}: Error finding container ff03ae74764eaf503f756bcf07f630709fe0863f1d5e999e98e3f902ed83e155: Status 404 returned error can't find the container with id ff03ae74764eaf503f756bcf07f630709fe0863f1d5e999e98e3f902ed83e155 Feb 28 04:45:00 crc kubenswrapper[5014]: I0228 04:45:00.908912 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537565-8xslg" event={"ID":"818c228c-0c91-4cb1-b010-40746252c8ee","Type":"ContainerStarted","Data":"4bc77d39841b4bc81790985dd4b80ea6c4b0cb61aa9d1de599360a28ca2e1ad2"} Feb 28 04:45:00 crc 
kubenswrapper[5014]: I0228 04:45:00.909297 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537565-8xslg" event={"ID":"818c228c-0c91-4cb1-b010-40746252c8ee","Type":"ContainerStarted","Data":"ff03ae74764eaf503f756bcf07f630709fe0863f1d5e999e98e3f902ed83e155"} Feb 28 04:45:00 crc kubenswrapper[5014]: I0228 04:45:00.935108 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29537565-8xslg" podStartSLOduration=0.935081012 podStartE2EDuration="935.081012ms" podCreationTimestamp="2026-02-28 04:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:45:00.928901965 +0000 UTC m=+689.599027875" watchObservedRunningTime="2026-02-28 04:45:00.935081012 +0000 UTC m=+689.605206952" Feb 28 04:45:01 crc kubenswrapper[5014]: I0228 04:45:01.918718 5014 generic.go:334] "Generic (PLEG): container finished" podID="818c228c-0c91-4cb1-b010-40746252c8ee" containerID="4bc77d39841b4bc81790985dd4b80ea6c4b0cb61aa9d1de599360a28ca2e1ad2" exitCode=0 Feb 28 04:45:01 crc kubenswrapper[5014]: I0228 04:45:01.918778 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537565-8xslg" event={"ID":"818c228c-0c91-4cb1-b010-40746252c8ee","Type":"ContainerDied","Data":"4bc77d39841b4bc81790985dd4b80ea6c4b0cb61aa9d1de599360a28ca2e1ad2"} Feb 28 04:45:03 crc kubenswrapper[5014]: I0228 04:45:03.166268 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537565-8xslg" Feb 28 04:45:03 crc kubenswrapper[5014]: I0228 04:45:03.322638 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/818c228c-0c91-4cb1-b010-40746252c8ee-config-volume\") pod \"818c228c-0c91-4cb1-b010-40746252c8ee\" (UID: \"818c228c-0c91-4cb1-b010-40746252c8ee\") " Feb 28 04:45:03 crc kubenswrapper[5014]: I0228 04:45:03.322747 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/818c228c-0c91-4cb1-b010-40746252c8ee-secret-volume\") pod \"818c228c-0c91-4cb1-b010-40746252c8ee\" (UID: \"818c228c-0c91-4cb1-b010-40746252c8ee\") " Feb 28 04:45:03 crc kubenswrapper[5014]: I0228 04:45:03.322910 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bm2sj\" (UniqueName: \"kubernetes.io/projected/818c228c-0c91-4cb1-b010-40746252c8ee-kube-api-access-bm2sj\") pod \"818c228c-0c91-4cb1-b010-40746252c8ee\" (UID: \"818c228c-0c91-4cb1-b010-40746252c8ee\") " Feb 28 04:45:03 crc kubenswrapper[5014]: I0228 04:45:03.324449 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/818c228c-0c91-4cb1-b010-40746252c8ee-config-volume" (OuterVolumeSpecName: "config-volume") pod "818c228c-0c91-4cb1-b010-40746252c8ee" (UID: "818c228c-0c91-4cb1-b010-40746252c8ee"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:45:03 crc kubenswrapper[5014]: I0228 04:45:03.324744 5014 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/818c228c-0c91-4cb1-b010-40746252c8ee-config-volume\") on node \"crc\" DevicePath \"\"" Feb 28 04:45:03 crc kubenswrapper[5014]: I0228 04:45:03.328981 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/818c228c-0c91-4cb1-b010-40746252c8ee-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "818c228c-0c91-4cb1-b010-40746252c8ee" (UID: "818c228c-0c91-4cb1-b010-40746252c8ee"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:45:03 crc kubenswrapper[5014]: I0228 04:45:03.330019 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/818c228c-0c91-4cb1-b010-40746252c8ee-kube-api-access-bm2sj" (OuterVolumeSpecName: "kube-api-access-bm2sj") pod "818c228c-0c91-4cb1-b010-40746252c8ee" (UID: "818c228c-0c91-4cb1-b010-40746252c8ee"). InnerVolumeSpecName "kube-api-access-bm2sj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:45:03 crc kubenswrapper[5014]: I0228 04:45:03.425850 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bm2sj\" (UniqueName: \"kubernetes.io/projected/818c228c-0c91-4cb1-b010-40746252c8ee-kube-api-access-bm2sj\") on node \"crc\" DevicePath \"\"" Feb 28 04:45:03 crc kubenswrapper[5014]: I0228 04:45:03.425884 5014 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/818c228c-0c91-4cb1-b010-40746252c8ee-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 28 04:45:03 crc kubenswrapper[5014]: I0228 04:45:03.932787 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537565-8xslg" event={"ID":"818c228c-0c91-4cb1-b010-40746252c8ee","Type":"ContainerDied","Data":"ff03ae74764eaf503f756bcf07f630709fe0863f1d5e999e98e3f902ed83e155"} Feb 28 04:45:03 crc kubenswrapper[5014]: I0228 04:45:03.932891 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff03ae74764eaf503f756bcf07f630709fe0863f1d5e999e98e3f902ed83e155" Feb 28 04:45:03 crc kubenswrapper[5014]: I0228 04:45:03.932918 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537565-8xslg" Feb 28 04:46:00 crc kubenswrapper[5014]: I0228 04:46:00.130998 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537566-7xlww"] Feb 28 04:46:00 crc kubenswrapper[5014]: E0228 04:46:00.131669 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="818c228c-0c91-4cb1-b010-40746252c8ee" containerName="collect-profiles" Feb 28 04:46:00 crc kubenswrapper[5014]: I0228 04:46:00.131684 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="818c228c-0c91-4cb1-b010-40746252c8ee" containerName="collect-profiles" Feb 28 04:46:00 crc kubenswrapper[5014]: I0228 04:46:00.131820 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="818c228c-0c91-4cb1-b010-40746252c8ee" containerName="collect-profiles" Feb 28 04:46:00 crc kubenswrapper[5014]: I0228 04:46:00.132269 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537566-7xlww" Feb 28 04:46:00 crc kubenswrapper[5014]: I0228 04:46:00.134890 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 04:46:00 crc kubenswrapper[5014]: I0228 04:46:00.135463 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 04:46:00 crc kubenswrapper[5014]: I0228 04:46:00.135862 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-r6pdk" Feb 28 04:46:00 crc kubenswrapper[5014]: I0228 04:46:00.204216 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537566-7xlww"] Feb 28 04:46:00 crc kubenswrapper[5014]: I0228 04:46:00.333416 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6mgd\" (UniqueName: 
\"kubernetes.io/projected/9af98cd8-9086-42ab-833a-2eb0d1fb73d5-kube-api-access-z6mgd\") pod \"auto-csr-approver-29537566-7xlww\" (UID: \"9af98cd8-9086-42ab-833a-2eb0d1fb73d5\") " pod="openshift-infra/auto-csr-approver-29537566-7xlww" Feb 28 04:46:00 crc kubenswrapper[5014]: I0228 04:46:00.434643 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6mgd\" (UniqueName: \"kubernetes.io/projected/9af98cd8-9086-42ab-833a-2eb0d1fb73d5-kube-api-access-z6mgd\") pod \"auto-csr-approver-29537566-7xlww\" (UID: \"9af98cd8-9086-42ab-833a-2eb0d1fb73d5\") " pod="openshift-infra/auto-csr-approver-29537566-7xlww" Feb 28 04:46:00 crc kubenswrapper[5014]: I0228 04:46:00.455871 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6mgd\" (UniqueName: \"kubernetes.io/projected/9af98cd8-9086-42ab-833a-2eb0d1fb73d5-kube-api-access-z6mgd\") pod \"auto-csr-approver-29537566-7xlww\" (UID: \"9af98cd8-9086-42ab-833a-2eb0d1fb73d5\") " pod="openshift-infra/auto-csr-approver-29537566-7xlww" Feb 28 04:46:00 crc kubenswrapper[5014]: I0228 04:46:00.506889 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537566-7xlww" Feb 28 04:46:00 crc kubenswrapper[5014]: I0228 04:46:00.900885 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537566-7xlww"] Feb 28 04:46:01 crc kubenswrapper[5014]: I0228 04:46:01.297725 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537566-7xlww" event={"ID":"9af98cd8-9086-42ab-833a-2eb0d1fb73d5","Type":"ContainerStarted","Data":"a34789993c0128a15491f288a6dd5a419b3cd5c41384e34c281553c2dffbfaab"} Feb 28 04:46:02 crc kubenswrapper[5014]: I0228 04:46:02.321265 5014 generic.go:334] "Generic (PLEG): container finished" podID="9af98cd8-9086-42ab-833a-2eb0d1fb73d5" containerID="39850295164bea58efb5f7091f3e17f94456f36d9c454a970df3a4e240bc0c36" exitCode=0 Feb 28 04:46:02 crc kubenswrapper[5014]: I0228 04:46:02.321355 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537566-7xlww" event={"ID":"9af98cd8-9086-42ab-833a-2eb0d1fb73d5","Type":"ContainerDied","Data":"39850295164bea58efb5f7091f3e17f94456f36d9c454a970df3a4e240bc0c36"} Feb 28 04:46:03 crc kubenswrapper[5014]: I0228 04:46:03.579377 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537566-7xlww" Feb 28 04:46:03 crc kubenswrapper[5014]: I0228 04:46:03.684517 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6mgd\" (UniqueName: \"kubernetes.io/projected/9af98cd8-9086-42ab-833a-2eb0d1fb73d5-kube-api-access-z6mgd\") pod \"9af98cd8-9086-42ab-833a-2eb0d1fb73d5\" (UID: \"9af98cd8-9086-42ab-833a-2eb0d1fb73d5\") " Feb 28 04:46:03 crc kubenswrapper[5014]: I0228 04:46:03.693404 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9af98cd8-9086-42ab-833a-2eb0d1fb73d5-kube-api-access-z6mgd" (OuterVolumeSpecName: "kube-api-access-z6mgd") pod "9af98cd8-9086-42ab-833a-2eb0d1fb73d5" (UID: "9af98cd8-9086-42ab-833a-2eb0d1fb73d5"). InnerVolumeSpecName "kube-api-access-z6mgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:46:03 crc kubenswrapper[5014]: I0228 04:46:03.786372 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6mgd\" (UniqueName: \"kubernetes.io/projected/9af98cd8-9086-42ab-833a-2eb0d1fb73d5-kube-api-access-z6mgd\") on node \"crc\" DevicePath \"\"" Feb 28 04:46:04 crc kubenswrapper[5014]: I0228 04:46:04.340782 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537566-7xlww" event={"ID":"9af98cd8-9086-42ab-833a-2eb0d1fb73d5","Type":"ContainerDied","Data":"a34789993c0128a15491f288a6dd5a419b3cd5c41384e34c281553c2dffbfaab"} Feb 28 04:46:04 crc kubenswrapper[5014]: I0228 04:46:04.340843 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a34789993c0128a15491f288a6dd5a419b3cd5c41384e34c281553c2dffbfaab" Feb 28 04:46:04 crc kubenswrapper[5014]: I0228 04:46:04.340854 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537566-7xlww" Feb 28 04:46:04 crc kubenswrapper[5014]: I0228 04:46:04.651956 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537560-g4xd4"] Feb 28 04:46:04 crc kubenswrapper[5014]: I0228 04:46:04.657907 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537560-g4xd4"] Feb 28 04:46:05 crc kubenswrapper[5014]: I0228 04:46:05.543629 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-pwx6w"] Feb 28 04:46:05 crc kubenswrapper[5014]: E0228 04:46:05.549266 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9af98cd8-9086-42ab-833a-2eb0d1fb73d5" containerName="oc" Feb 28 04:46:05 crc kubenswrapper[5014]: I0228 04:46:05.549301 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="9af98cd8-9086-42ab-833a-2eb0d1fb73d5" containerName="oc" Feb 28 04:46:05 crc kubenswrapper[5014]: I0228 04:46:05.549461 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="9af98cd8-9086-42ab-833a-2eb0d1fb73d5" containerName="oc" Feb 28 04:46:05 crc kubenswrapper[5014]: I0228 04:46:05.550039 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-lnv49"] Feb 28 04:46:05 crc kubenswrapper[5014]: I0228 04:46:05.550199 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-pwx6w" Feb 28 04:46:05 crc kubenswrapper[5014]: I0228 04:46:05.551091 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858654f9db-lnv49" Feb 28 04:46:05 crc kubenswrapper[5014]: I0228 04:46:05.554781 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 28 04:46:05 crc kubenswrapper[5014]: I0228 04:46:05.555087 5014 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-zj6rw" Feb 28 04:46:05 crc kubenswrapper[5014]: I0228 04:46:05.555130 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 28 04:46:05 crc kubenswrapper[5014]: I0228 04:46:05.555236 5014 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-kxgk6" Feb 28 04:46:05 crc kubenswrapper[5014]: I0228 04:46:05.556133 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-pwx6w"] Feb 28 04:46:05 crc kubenswrapper[5014]: I0228 04:46:05.565270 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-gwzqp"] Feb 28 04:46:05 crc kubenswrapper[5014]: I0228 04:46:05.566120 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-gwzqp" Feb 28 04:46:05 crc kubenswrapper[5014]: I0228 04:46:05.568625 5014 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-lgltl" Feb 28 04:46:05 crc kubenswrapper[5014]: I0228 04:46:05.573473 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-lnv49"] Feb 28 04:46:05 crc kubenswrapper[5014]: I0228 04:46:05.579716 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-gwzqp"] Feb 28 04:46:05 crc kubenswrapper[5014]: I0228 04:46:05.621552 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbbcd\" (UniqueName: \"kubernetes.io/projected/74306563-899f-44f1-b51a-e9aed7bd437c-kube-api-access-sbbcd\") pod \"cert-manager-cainjector-cf98fcc89-pwx6w\" (UID: \"74306563-899f-44f1-b51a-e9aed7bd437c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-pwx6w" Feb 28 04:46:05 crc kubenswrapper[5014]: I0228 04:46:05.621602 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62xd7\" (UniqueName: \"kubernetes.io/projected/efbeff5a-c04c-47c0-8c97-338798ffc76b-kube-api-access-62xd7\") pod \"cert-manager-858654f9db-lnv49\" (UID: \"efbeff5a-c04c-47c0-8c97-338798ffc76b\") " pod="cert-manager/cert-manager-858654f9db-lnv49" Feb 28 04:46:05 crc kubenswrapper[5014]: I0228 04:46:05.621708 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n8rc\" (UniqueName: \"kubernetes.io/projected/f921b55b-c9e9-4183-a430-192642dc2b06-kube-api-access-8n8rc\") pod \"cert-manager-webhook-687f57d79b-gwzqp\" (UID: \"f921b55b-c9e9-4183-a430-192642dc2b06\") " pod="cert-manager/cert-manager-webhook-687f57d79b-gwzqp" Feb 28 04:46:05 crc kubenswrapper[5014]: I0228 04:46:05.722298 5014 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n8rc\" (UniqueName: \"kubernetes.io/projected/f921b55b-c9e9-4183-a430-192642dc2b06-kube-api-access-8n8rc\") pod \"cert-manager-webhook-687f57d79b-gwzqp\" (UID: \"f921b55b-c9e9-4183-a430-192642dc2b06\") " pod="cert-manager/cert-manager-webhook-687f57d79b-gwzqp" Feb 28 04:46:05 crc kubenswrapper[5014]: I0228 04:46:05.722377 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbbcd\" (UniqueName: \"kubernetes.io/projected/74306563-899f-44f1-b51a-e9aed7bd437c-kube-api-access-sbbcd\") pod \"cert-manager-cainjector-cf98fcc89-pwx6w\" (UID: \"74306563-899f-44f1-b51a-e9aed7bd437c\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-pwx6w" Feb 28 04:46:05 crc kubenswrapper[5014]: I0228 04:46:05.722404 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62xd7\" (UniqueName: \"kubernetes.io/projected/efbeff5a-c04c-47c0-8c97-338798ffc76b-kube-api-access-62xd7\") pod \"cert-manager-858654f9db-lnv49\" (UID: \"efbeff5a-c04c-47c0-8c97-338798ffc76b\") " pod="cert-manager/cert-manager-858654f9db-lnv49" Feb 28 04:46:05 crc kubenswrapper[5014]: I0228 04:46:05.740783 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62xd7\" (UniqueName: \"kubernetes.io/projected/efbeff5a-c04c-47c0-8c97-338798ffc76b-kube-api-access-62xd7\") pod \"cert-manager-858654f9db-lnv49\" (UID: \"efbeff5a-c04c-47c0-8c97-338798ffc76b\") " pod="cert-manager/cert-manager-858654f9db-lnv49" Feb 28 04:46:05 crc kubenswrapper[5014]: I0228 04:46:05.740827 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbbcd\" (UniqueName: \"kubernetes.io/projected/74306563-899f-44f1-b51a-e9aed7bd437c-kube-api-access-sbbcd\") pod \"cert-manager-cainjector-cf98fcc89-pwx6w\" (UID: \"74306563-899f-44f1-b51a-e9aed7bd437c\") " 
pod="cert-manager/cert-manager-cainjector-cf98fcc89-pwx6w" Feb 28 04:46:05 crc kubenswrapper[5014]: I0228 04:46:05.740903 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n8rc\" (UniqueName: \"kubernetes.io/projected/f921b55b-c9e9-4183-a430-192642dc2b06-kube-api-access-8n8rc\") pod \"cert-manager-webhook-687f57d79b-gwzqp\" (UID: \"f921b55b-c9e9-4183-a430-192642dc2b06\") " pod="cert-manager/cert-manager-webhook-687f57d79b-gwzqp" Feb 28 04:46:05 crc kubenswrapper[5014]: I0228 04:46:05.912384 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-pwx6w" Feb 28 04:46:05 crc kubenswrapper[5014]: I0228 04:46:05.927397 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-lnv49" Feb 28 04:46:05 crc kubenswrapper[5014]: I0228 04:46:05.934364 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-gwzqp" Feb 28 04:46:06 crc kubenswrapper[5014]: I0228 04:46:06.182217 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56bd259d-1322-4f57-aa09-1384b22a54a9" path="/var/lib/kubelet/pods/56bd259d-1322-4f57-aa09-1384b22a54a9/volumes" Feb 28 04:46:06 crc kubenswrapper[5014]: I0228 04:46:06.186548 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-gwzqp"] Feb 28 04:46:06 crc kubenswrapper[5014]: W0228 04:46:06.194399 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf921b55b_c9e9_4183_a430_192642dc2b06.slice/crio-d7a3e074e2cead07b43e2ff46b98fc87422c17a0ebe29b9530c783c14230448e WatchSource:0}: Error finding container d7a3e074e2cead07b43e2ff46b98fc87422c17a0ebe29b9530c783c14230448e: Status 404 returned error can't find the container with id 
d7a3e074e2cead07b43e2ff46b98fc87422c17a0ebe29b9530c783c14230448e Feb 28 04:46:06 crc kubenswrapper[5014]: I0228 04:46:06.214822 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-pwx6w"] Feb 28 04:46:06 crc kubenswrapper[5014]: W0228 04:46:06.218767 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74306563_899f_44f1_b51a_e9aed7bd437c.slice/crio-4a186340d796709ade881a7c44624e27deca411a12102b1cc62596cfd0ea4bf2 WatchSource:0}: Error finding container 4a186340d796709ade881a7c44624e27deca411a12102b1cc62596cfd0ea4bf2: Status 404 returned error can't find the container with id 4a186340d796709ade881a7c44624e27deca411a12102b1cc62596cfd0ea4bf2 Feb 28 04:46:06 crc kubenswrapper[5014]: I0228 04:46:06.350127 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-pwx6w" event={"ID":"74306563-899f-44f1-b51a-e9aed7bd437c","Type":"ContainerStarted","Data":"4a186340d796709ade881a7c44624e27deca411a12102b1cc62596cfd0ea4bf2"} Feb 28 04:46:06 crc kubenswrapper[5014]: I0228 04:46:06.351161 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-gwzqp" event={"ID":"f921b55b-c9e9-4183-a430-192642dc2b06","Type":"ContainerStarted","Data":"d7a3e074e2cead07b43e2ff46b98fc87422c17a0ebe29b9530c783c14230448e"} Feb 28 04:46:06 crc kubenswrapper[5014]: W0228 04:46:06.458534 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podefbeff5a_c04c_47c0_8c97_338798ffc76b.slice/crio-d9efd0dd01a8ac1c8381d59474abc3e0ad4cf1f881a79ab8770c98c521d203ff WatchSource:0}: Error finding container d9efd0dd01a8ac1c8381d59474abc3e0ad4cf1f881a79ab8770c98c521d203ff: Status 404 returned error can't find the container with id d9efd0dd01a8ac1c8381d59474abc3e0ad4cf1f881a79ab8770c98c521d203ff Feb 28 04:46:06 
crc kubenswrapper[5014]: I0228 04:46:06.459547 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-lnv49"] Feb 28 04:46:07 crc kubenswrapper[5014]: I0228 04:46:07.362483 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-lnv49" event={"ID":"efbeff5a-c04c-47c0-8c97-338798ffc76b","Type":"ContainerStarted","Data":"d9efd0dd01a8ac1c8381d59474abc3e0ad4cf1f881a79ab8770c98c521d203ff"} Feb 28 04:46:09 crc kubenswrapper[5014]: I0228 04:46:09.408851 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-gwzqp" event={"ID":"f921b55b-c9e9-4183-a430-192642dc2b06","Type":"ContainerStarted","Data":"5111e2f74ebc189e8f4e662d3572ccd64e4ad8da7026426d3a88143b9ed1794a"} Feb 28 04:46:09 crc kubenswrapper[5014]: I0228 04:46:09.409289 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-gwzqp" Feb 28 04:46:09 crc kubenswrapper[5014]: I0228 04:46:09.432253 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-gwzqp" podStartSLOduration=2.185245279 podStartE2EDuration="4.432134591s" podCreationTimestamp="2026-02-28 04:46:05 +0000 UTC" firstStartedPulling="2026-02-28 04:46:06.197317802 +0000 UTC m=+754.867443722" lastFinishedPulling="2026-02-28 04:46:08.444207124 +0000 UTC m=+757.114333034" observedRunningTime="2026-02-28 04:46:09.427553058 +0000 UTC m=+758.097678968" watchObservedRunningTime="2026-02-28 04:46:09.432134591 +0000 UTC m=+758.102260501" Feb 28 04:46:10 crc kubenswrapper[5014]: I0228 04:46:10.414596 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-lnv49" event={"ID":"efbeff5a-c04c-47c0-8c97-338798ffc76b","Type":"ContainerStarted","Data":"191939e40cf64624564a083df1a207b9bdd8d6f13659b7e88f8606c2f2123ca5"} Feb 28 04:46:10 crc kubenswrapper[5014]: I0228 
04:46:10.416519 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-pwx6w" event={"ID":"74306563-899f-44f1-b51a-e9aed7bd437c","Type":"ContainerStarted","Data":"bbcb5741d8c11eefc41df50e3cdc82daf3c7aede35852baae030b07680b978ef"} Feb 28 04:46:10 crc kubenswrapper[5014]: I0228 04:46:10.428968 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-lnv49" podStartSLOduration=2.141836111 podStartE2EDuration="5.428945658s" podCreationTimestamp="2026-02-28 04:46:05 +0000 UTC" firstStartedPulling="2026-02-28 04:46:06.46058144 +0000 UTC m=+755.130707350" lastFinishedPulling="2026-02-28 04:46:09.747690957 +0000 UTC m=+758.417816897" observedRunningTime="2026-02-28 04:46:10.428550287 +0000 UTC m=+759.098676197" watchObservedRunningTime="2026-02-28 04:46:10.428945658 +0000 UTC m=+759.099071568" Feb 28 04:46:10 crc kubenswrapper[5014]: I0228 04:46:10.459859 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-pwx6w" podStartSLOduration=1.98470455 podStartE2EDuration="5.459824049s" podCreationTimestamp="2026-02-28 04:46:05 +0000 UTC" firstStartedPulling="2026-02-28 04:46:06.221290507 +0000 UTC m=+754.891416417" lastFinishedPulling="2026-02-28 04:46:09.696409976 +0000 UTC m=+758.366535916" observedRunningTime="2026-02-28 04:46:10.45576572 +0000 UTC m=+759.125891630" watchObservedRunningTime="2026-02-28 04:46:10.459824049 +0000 UTC m=+759.129949969" Feb 28 04:46:15 crc kubenswrapper[5014]: I0228 04:46:15.688973 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-62hnq"] Feb 28 04:46:15 crc kubenswrapper[5014]: I0228 04:46:15.690531 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="ovn-controller" 
containerID="cri-o://6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c" gracePeriod=30 Feb 28 04:46:15 crc kubenswrapper[5014]: I0228 04:46:15.690810 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="nbdb" containerID="cri-o://3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331" gracePeriod=30 Feb 28 04:46:15 crc kubenswrapper[5014]: I0228 04:46:15.690894 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="northd" containerID="cri-o://4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c" gracePeriod=30 Feb 28 04:46:15 crc kubenswrapper[5014]: I0228 04:46:15.690951 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="sbdb" containerID="cri-o://01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56" gracePeriod=30 Feb 28 04:46:15 crc kubenswrapper[5014]: I0228 04:46:15.690998 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="kube-rbac-proxy-node" containerID="cri-o://b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013" gracePeriod=30 Feb 28 04:46:15 crc kubenswrapper[5014]: I0228 04:46:15.691041 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="ovn-acl-logging" containerID="cri-o://6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388" gracePeriod=30 Feb 28 04:46:15 crc kubenswrapper[5014]: I0228 04:46:15.698033 5014 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63" gracePeriod=30 Feb 28 04:46:15 crc kubenswrapper[5014]: I0228 04:46:15.736893 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="ovnkube-controller" containerID="cri-o://b8168a4906c3c8c4ad5a434974d08db3f533dfe3dacced9fb6f2e64954b84a4b" gracePeriod=30 Feb 28 04:46:15 crc kubenswrapper[5014]: I0228 04:46:15.937451 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-gwzqp" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.040629 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-62hnq_faa5db1f-df50-492a-9d45-d5065bdc63d2/ovnkube-controller/3.log" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.042741 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-62hnq_faa5db1f-df50-492a-9d45-d5065bdc63d2/ovn-acl-logging/0.log" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.043291 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-62hnq_faa5db1f-df50-492a-9d45-d5065bdc63d2/ovn-controller/0.log" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.043706 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.092279 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-gnmjb"] Feb 28 04:46:16 crc kubenswrapper[5014]: E0228 04:46:16.092462 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="ovnkube-controller" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.092473 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="ovnkube-controller" Feb 28 04:46:16 crc kubenswrapper[5014]: E0228 04:46:16.092480 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="ovnkube-controller" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.092486 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="ovnkube-controller" Feb 28 04:46:16 crc kubenswrapper[5014]: E0228 04:46:16.092495 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="ovnkube-controller" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.092501 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="ovnkube-controller" Feb 28 04:46:16 crc kubenswrapper[5014]: E0228 04:46:16.092510 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="northd" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.092515 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="northd" Feb 28 04:46:16 crc kubenswrapper[5014]: E0228 04:46:16.092523 5014 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="nbdb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.092529 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="nbdb" Feb 28 04:46:16 crc kubenswrapper[5014]: E0228 04:46:16.092541 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="ovn-acl-logging" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.092547 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="ovn-acl-logging" Feb 28 04:46:16 crc kubenswrapper[5014]: E0228 04:46:16.092554 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="sbdb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.092561 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="sbdb" Feb 28 04:46:16 crc kubenswrapper[5014]: E0228 04:46:16.092570 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="ovn-controller" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.092576 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="ovn-controller" Feb 28 04:46:16 crc kubenswrapper[5014]: E0228 04:46:16.092584 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="kubecfg-setup" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.092591 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="kubecfg-setup" Feb 28 04:46:16 crc kubenswrapper[5014]: E0228 04:46:16.092597 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" 
containerName="kube-rbac-proxy-node" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.092602 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="kube-rbac-proxy-node" Feb 28 04:46:16 crc kubenswrapper[5014]: E0228 04:46:16.092609 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="kube-rbac-proxy-ovn-metrics" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.092614 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="kube-rbac-proxy-ovn-metrics" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.092693 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="nbdb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.092704 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="kube-rbac-proxy-ovn-metrics" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.092714 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="ovnkube-controller" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.092725 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="kube-rbac-proxy-node" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.092747 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="ovn-acl-logging" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.092754 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="ovnkube-controller" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.092762 5014 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="ovnkube-controller" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.092768 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="sbdb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.092775 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="ovn-controller" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.092783 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="northd" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.092790 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="ovnkube-controller" Feb 28 04:46:16 crc kubenswrapper[5014]: E0228 04:46:16.092891 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="ovnkube-controller" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.092898 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="ovnkube-controller" Feb 28 04:46:16 crc kubenswrapper[5014]: E0228 04:46:16.092906 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="ovnkube-controller" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.092912 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="ovnkube-controller" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.092999 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerName="ovnkube-controller" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.094441 5014 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.189083 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp7g9\" (UniqueName: \"kubernetes.io/projected/faa5db1f-df50-492a-9d45-d5065bdc63d2-kube-api-access-vp7g9\") pod \"faa5db1f-df50-492a-9d45-d5065bdc63d2\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.189179 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-run-netns\") pod \"faa5db1f-df50-492a-9d45-d5065bdc63d2\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.189258 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "faa5db1f-df50-492a-9d45-d5065bdc63d2" (UID: "faa5db1f-df50-492a-9d45-d5065bdc63d2"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.189390 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-run-systemd\") pod \"faa5db1f-df50-492a-9d45-d5065bdc63d2\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.189418 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-run-openvswitch\") pod \"faa5db1f-df50-492a-9d45-d5065bdc63d2\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.189445 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/faa5db1f-df50-492a-9d45-d5065bdc63d2-ovnkube-script-lib\") pod \"faa5db1f-df50-492a-9d45-d5065bdc63d2\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.189469 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-node-log\") pod \"faa5db1f-df50-492a-9d45-d5065bdc63d2\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.189484 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-log-socket\") pod \"faa5db1f-df50-492a-9d45-d5065bdc63d2\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.189508 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/faa5db1f-df50-492a-9d45-d5065bdc63d2-env-overrides\") pod \"faa5db1f-df50-492a-9d45-d5065bdc63d2\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.189557 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "faa5db1f-df50-492a-9d45-d5065bdc63d2" (UID: "faa5db1f-df50-492a-9d45-d5065bdc63d2"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.189632 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-var-lib-openvswitch\") pod \"faa5db1f-df50-492a-9d45-d5065bdc63d2\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.189645 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-node-log" (OuterVolumeSpecName: "node-log") pod "faa5db1f-df50-492a-9d45-d5065bdc63d2" (UID: "faa5db1f-df50-492a-9d45-d5065bdc63d2"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.189679 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "faa5db1f-df50-492a-9d45-d5065bdc63d2" (UID: "faa5db1f-df50-492a-9d45-d5065bdc63d2"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.189725 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-systemd-units\") pod \"faa5db1f-df50-492a-9d45-d5065bdc63d2\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.189749 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "faa5db1f-df50-492a-9d45-d5065bdc63d2" (UID: "faa5db1f-df50-492a-9d45-d5065bdc63d2"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.189885 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-slash\") pod \"faa5db1f-df50-492a-9d45-d5065bdc63d2\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.189903 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/faa5db1f-df50-492a-9d45-d5065bdc63d2-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "faa5db1f-df50-492a-9d45-d5065bdc63d2" (UID: "faa5db1f-df50-492a-9d45-d5065bdc63d2"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.189911 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-slash" (OuterVolumeSpecName: "host-slash") pod "faa5db1f-df50-492a-9d45-d5065bdc63d2" (UID: "faa5db1f-df50-492a-9d45-d5065bdc63d2"). 
InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.189933 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-log-socket" (OuterVolumeSpecName: "log-socket") pod "faa5db1f-df50-492a-9d45-d5065bdc63d2" (UID: "faa5db1f-df50-492a-9d45-d5065bdc63d2"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.190167 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-cni-bin\") pod \"faa5db1f-df50-492a-9d45-d5065bdc63d2\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.190227 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"faa5db1f-df50-492a-9d45-d5065bdc63d2\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.190238 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "faa5db1f-df50-492a-9d45-d5065bdc63d2" (UID: "faa5db1f-df50-492a-9d45-d5065bdc63d2"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.190266 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/faa5db1f-df50-492a-9d45-d5065bdc63d2-ovn-node-metrics-cert\") pod \"faa5db1f-df50-492a-9d45-d5065bdc63d2\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.190302 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-etc-openvswitch\") pod \"faa5db1f-df50-492a-9d45-d5065bdc63d2\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.190317 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "faa5db1f-df50-492a-9d45-d5065bdc63d2" (UID: "faa5db1f-df50-492a-9d45-d5065bdc63d2"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.190330 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/faa5db1f-df50-492a-9d45-d5065bdc63d2-ovnkube-config\") pod \"faa5db1f-df50-492a-9d45-d5065bdc63d2\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.190360 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "faa5db1f-df50-492a-9d45-d5065bdc63d2" (UID: "faa5db1f-df50-492a-9d45-d5065bdc63d2"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.190371 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-run-ovn-kubernetes\") pod \"faa5db1f-df50-492a-9d45-d5065bdc63d2\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.190401 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-run-ovn\") pod \"faa5db1f-df50-492a-9d45-d5065bdc63d2\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.190424 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-cni-netd\") pod \"faa5db1f-df50-492a-9d45-d5065bdc63d2\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.190443 5014 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-kubelet\") pod \"faa5db1f-df50-492a-9d45-d5065bdc63d2\" (UID: \"faa5db1f-df50-492a-9d45-d5065bdc63d2\") " Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.190367 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/faa5db1f-df50-492a-9d45-d5065bdc63d2-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "faa5db1f-df50-492a-9d45-d5065bdc63d2" (UID: "faa5db1f-df50-492a-9d45-d5065bdc63d2"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.190667 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-ovnkube-config\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.190727 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-var-lib-openvswitch\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.190773 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-host-cni-netd\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 
04:46:16.190801 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jd8kn\" (UniqueName: \"kubernetes.io/projected/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-kube-api-access-jd8kn\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.190842 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-node-log\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.190863 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-systemd-units\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.190890 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-host-run-netns\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.191048 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-ovnkube-script-lib\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc 
kubenswrapper[5014]: I0228 04:46:16.191104 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-ovn-node-metrics-cert\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.191196 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-run-systemd\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.191247 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-run-openvswitch\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.191295 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-host-cni-bin\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.191337 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-host-run-ovn-kubernetes\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.191417 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-run-ovn\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.191478 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-host-kubelet\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.191520 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-log-socket\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.191544 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-env-overrides\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.191745 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gnmjb\" (UID: 
\"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.191856 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-host-slash\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.191898 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-etc-openvswitch\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.190407 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "faa5db1f-df50-492a-9d45-d5065bdc63d2" (UID: "faa5db1f-df50-492a-9d45-d5065bdc63d2"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.190456 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "faa5db1f-df50-492a-9d45-d5065bdc63d2" (UID: "faa5db1f-df50-492a-9d45-d5065bdc63d2"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.190527 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "faa5db1f-df50-492a-9d45-d5065bdc63d2" (UID: "faa5db1f-df50-492a-9d45-d5065bdc63d2"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.190777 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/faa5db1f-df50-492a-9d45-d5065bdc63d2-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "faa5db1f-df50-492a-9d45-d5065bdc63d2" (UID: "faa5db1f-df50-492a-9d45-d5065bdc63d2"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.190848 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "faa5db1f-df50-492a-9d45-d5065bdc63d2" (UID: "faa5db1f-df50-492a-9d45-d5065bdc63d2"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.192031 5014 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.192053 5014 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.192070 5014 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.192086 5014 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.192097 5014 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.192108 5014 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.192119 5014 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 28 
04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.192132 5014 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/faa5db1f-df50-492a-9d45-d5065bdc63d2-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.192144 5014 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-node-log\") on node \"crc\" DevicePath \"\"" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.192155 5014 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-log-socket\") on node \"crc\" DevicePath \"\"" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.192169 5014 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/faa5db1f-df50-492a-9d45-d5065bdc63d2-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.192181 5014 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.192193 5014 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.192205 5014 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-slash\") on node \"crc\" DevicePath \"\"" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.195153 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/faa5db1f-df50-492a-9d45-d5065bdc63d2-kube-api-access-vp7g9" (OuterVolumeSpecName: "kube-api-access-vp7g9") pod "faa5db1f-df50-492a-9d45-d5065bdc63d2" (UID: "faa5db1f-df50-492a-9d45-d5065bdc63d2"). InnerVolumeSpecName "kube-api-access-vp7g9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.197229 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/faa5db1f-df50-492a-9d45-d5065bdc63d2-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "faa5db1f-df50-492a-9d45-d5065bdc63d2" (UID: "faa5db1f-df50-492a-9d45-d5065bdc63d2"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.212458 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "faa5db1f-df50-492a-9d45-d5065bdc63d2" (UID: "faa5db1f-df50-492a-9d45-d5065bdc63d2"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.293662 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-run-openvswitch\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.293740 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-host-cni-bin\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.293774 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-host-run-ovn-kubernetes\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.293878 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-run-ovn\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.293923 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-host-kubelet\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc 
kubenswrapper[5014]: I0228 04:46:16.293956 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-log-socket\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.293992 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-env-overrides\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.293949 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-host-cni-bin\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.294010 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-host-run-ovn-kubernetes\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.294027 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-log-socket\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.294086 5014 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-run-ovn\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.294046 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-run-openvswitch\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.293995 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-host-kubelet\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.294083 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.294033 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.294260 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-etc-openvswitch\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.294299 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-host-slash\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.294317 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-etc-openvswitch\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.294366 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-host-slash\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.294448 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-ovnkube-config\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.294491 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-var-lib-openvswitch\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.294536 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-host-cni-netd\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.294573 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-node-log\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.294607 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jd8kn\" (UniqueName: \"kubernetes.io/projected/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-kube-api-access-jd8kn\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.294633 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-var-lib-openvswitch\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.294639 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-systemd-units\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.294669 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-host-cni-netd\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.294681 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-systemd-units\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.294690 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-ovnkube-script-lib\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.294732 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-host-run-netns\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.294765 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-ovn-node-metrics-cert\") 
pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.294808 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-run-systemd\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.294922 5014 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.294947 5014 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/faa5db1f-df50-492a-9d45-d5065bdc63d2-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.294974 5014 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/faa5db1f-df50-492a-9d45-d5065bdc63d2-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.294980 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-env-overrides\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.294999 5014 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 28 04:46:16 crc kubenswrapper[5014]: 
I0228 04:46:16.294834 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-host-run-netns\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.295012 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-run-systemd\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.295049 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-node-log\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.295042 5014 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/faa5db1f-df50-492a-9d45-d5065bdc63d2-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.295108 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp7g9\" (UniqueName: \"kubernetes.io/projected/faa5db1f-df50-492a-9d45-d5065bdc63d2-kube-api-access-vp7g9\") on node \"crc\" DevicePath \"\"" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.295790 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-ovnkube-config\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 
04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.295879 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-ovnkube-script-lib\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.297982 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-ovn-node-metrics-cert\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.329975 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jd8kn\" (UniqueName: \"kubernetes.io/projected/75cfedbe-2f56-4160-b4bc-2349fdcb6bba-kube-api-access-jd8kn\") pod \"ovnkube-node-gnmjb\" (UID: \"75cfedbe-2f56-4160-b4bc-2349fdcb6bba\") " pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.408580 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:16 crc kubenswrapper[5014]: W0228 04:46:16.440511 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75cfedbe_2f56_4160_b4bc_2349fdcb6bba.slice/crio-0145ba67bee28ad04147870dec969a21d8fbc9b51efd22add174f8002f5ada68 WatchSource:0}: Error finding container 0145ba67bee28ad04147870dec969a21d8fbc9b51efd22add174f8002f5ada68: Status 404 returned error can't find the container with id 0145ba67bee28ad04147870dec969a21d8fbc9b51efd22add174f8002f5ada68 Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.463771 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8xzmq_08c35a73-dfa6-4097-beb4-3a6d4f419559/kube-multus/2.log" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.464798 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8xzmq_08c35a73-dfa6-4097-beb4-3a6d4f419559/kube-multus/1.log" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.464915 5014 generic.go:334] "Generic (PLEG): container finished" podID="08c35a73-dfa6-4097-beb4-3a6d4f419559" containerID="8ff78696065aad57b08b2613c61ae28962b3f9b9cd220106fba6bb3cf06b46a1" exitCode=2 Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.464981 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8xzmq" event={"ID":"08c35a73-dfa6-4097-beb4-3a6d4f419559","Type":"ContainerDied","Data":"8ff78696065aad57b08b2613c61ae28962b3f9b9cd220106fba6bb3cf06b46a1"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.465063 5014 scope.go:117] "RemoveContainer" containerID="46118cb340244cc019714cbd9e95c064aa047e8ce68cbb3a667a52b402ae00bb" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.465794 5014 scope.go:117] "RemoveContainer" containerID="8ff78696065aad57b08b2613c61ae28962b3f9b9cd220106fba6bb3cf06b46a1" Feb 28 04:46:16 crc kubenswrapper[5014]: 
E0228 04:46:16.466168 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-8xzmq_openshift-multus(08c35a73-dfa6-4097-beb4-3a6d4f419559)\"" pod="openshift-multus/multus-8xzmq" podUID="08c35a73-dfa6-4097-beb4-3a6d4f419559" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.470975 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-62hnq_faa5db1f-df50-492a-9d45-d5065bdc63d2/ovnkube-controller/3.log" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.479672 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-62hnq_faa5db1f-df50-492a-9d45-d5065bdc63d2/ovn-acl-logging/0.log" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.480392 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-62hnq_faa5db1f-df50-492a-9d45-d5065bdc63d2/ovn-controller/0.log" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.481128 5014 generic.go:334] "Generic (PLEG): container finished" podID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerID="b8168a4906c3c8c4ad5a434974d08db3f533dfe3dacced9fb6f2e64954b84a4b" exitCode=0 Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.481177 5014 generic.go:334] "Generic (PLEG): container finished" podID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerID="01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56" exitCode=0 Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.481187 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" event={"ID":"faa5db1f-df50-492a-9d45-d5065bdc63d2","Type":"ContainerDied","Data":"b8168a4906c3c8c4ad5a434974d08db3f533dfe3dacced9fb6f2e64954b84a4b"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.481437 5014 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.481529 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" event={"ID":"faa5db1f-df50-492a-9d45-d5065bdc63d2","Type":"ContainerDied","Data":"01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.481580 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" event={"ID":"faa5db1f-df50-492a-9d45-d5065bdc63d2","Type":"ContainerDied","Data":"3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.481440 5014 generic.go:334] "Generic (PLEG): container finished" podID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerID="3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331" exitCode=0 Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.481628 5014 generic.go:334] "Generic (PLEG): container finished" podID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerID="4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c" exitCode=0 Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.481648 5014 generic.go:334] "Generic (PLEG): container finished" podID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerID="2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63" exitCode=0 Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.481716 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" event={"ID":"faa5db1f-df50-492a-9d45-d5065bdc63d2","Type":"ContainerDied","Data":"4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.481743 5014 generic.go:334] "Generic (PLEG): container finished" podID="faa5db1f-df50-492a-9d45-d5065bdc63d2" 
containerID="b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013" exitCode=0 Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.481763 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" event={"ID":"faa5db1f-df50-492a-9d45-d5065bdc63d2","Type":"ContainerDied","Data":"2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.481782 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" event={"ID":"faa5db1f-df50-492a-9d45-d5065bdc63d2","Type":"ContainerDied","Data":"b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.481796 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b8168a4906c3c8c4ad5a434974d08db3f533dfe3dacced9fb6f2e64954b84a4b"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.481820 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.481826 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.481835 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.481843 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c"} Feb 28 04:46:16 crc 
kubenswrapper[5014]: I0228 04:46:16.481850 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.481857 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.481864 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.481870 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.481876 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.482037 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" event={"ID":"faa5db1f-df50-492a-9d45-d5065bdc63d2","Type":"ContainerDied","Data":"6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.481767 5014 generic.go:334] "Generic (PLEG): container finished" podID="faa5db1f-df50-492a-9d45-d5065bdc63d2" containerID="6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388" exitCode=143 Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.482065 5014 generic.go:334] "Generic (PLEG): container finished" podID="faa5db1f-df50-492a-9d45-d5065bdc63d2" 
containerID="6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c" exitCode=143 Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.482048 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b8168a4906c3c8c4ad5a434974d08db3f533dfe3dacced9fb6f2e64954b84a4b"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.482238 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.482257 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.482269 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.482281 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.482291 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.482302 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.482315 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.482326 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.482345 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.482365 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" event={"ID":"faa5db1f-df50-492a-9d45-d5065bdc63d2","Type":"ContainerDied","Data":"6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.482385 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b8168a4906c3c8c4ad5a434974d08db3f533dfe3dacced9fb6f2e64954b84a4b"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.482399 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.482410 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.482420 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.482431 5014 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.482441 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.482452 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.482463 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.482474 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.482486 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.482500 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-62hnq" event={"ID":"faa5db1f-df50-492a-9d45-d5065bdc63d2","Type":"ContainerDied","Data":"0c1b6d6f056d4cfc0fc21116705d96feffb9e30ec1e9a6383f4adcb16d2de01a"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.482516 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b8168a4906c3c8c4ad5a434974d08db3f533dfe3dacced9fb6f2e64954b84a4b"} Feb 28 
04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.482528 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.482539 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.482550 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.482560 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.482571 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.482581 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.482591 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.482601 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c"} Feb 28 
04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.482611 5014 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.483743 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" event={"ID":"75cfedbe-2f56-4160-b4bc-2349fdcb6bba","Type":"ContainerStarted","Data":"0145ba67bee28ad04147870dec969a21d8fbc9b51efd22add174f8002f5ada68"} Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.533362 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-62hnq"] Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.535784 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-62hnq"] Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.889671 5014 scope.go:117] "RemoveContainer" containerID="b8168a4906c3c8c4ad5a434974d08db3f533dfe3dacced9fb6f2e64954b84a4b" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.910539 5014 scope.go:117] "RemoveContainer" containerID="f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.940595 5014 scope.go:117] "RemoveContainer" containerID="01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56" Feb 28 04:46:16 crc kubenswrapper[5014]: I0228 04:46:16.965333 5014 scope.go:117] "RemoveContainer" containerID="3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.033188 5014 scope.go:117] "RemoveContainer" containerID="4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.050936 5014 scope.go:117] "RemoveContainer" containerID="2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63" Feb 28 04:46:17 crc 
kubenswrapper[5014]: I0228 04:46:17.074142 5014 scope.go:117] "RemoveContainer" containerID="b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.095853 5014 scope.go:117] "RemoveContainer" containerID="6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.118107 5014 scope.go:117] "RemoveContainer" containerID="6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.137368 5014 scope.go:117] "RemoveContainer" containerID="d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.155268 5014 scope.go:117] "RemoveContainer" containerID="b8168a4906c3c8c4ad5a434974d08db3f533dfe3dacced9fb6f2e64954b84a4b" Feb 28 04:46:17 crc kubenswrapper[5014]: E0228 04:46:17.155922 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8168a4906c3c8c4ad5a434974d08db3f533dfe3dacced9fb6f2e64954b84a4b\": container with ID starting with b8168a4906c3c8c4ad5a434974d08db3f533dfe3dacced9fb6f2e64954b84a4b not found: ID does not exist" containerID="b8168a4906c3c8c4ad5a434974d08db3f533dfe3dacced9fb6f2e64954b84a4b" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.155982 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8168a4906c3c8c4ad5a434974d08db3f533dfe3dacced9fb6f2e64954b84a4b"} err="failed to get container status \"b8168a4906c3c8c4ad5a434974d08db3f533dfe3dacced9fb6f2e64954b84a4b\": rpc error: code = NotFound desc = could not find container \"b8168a4906c3c8c4ad5a434974d08db3f533dfe3dacced9fb6f2e64954b84a4b\": container with ID starting with b8168a4906c3c8c4ad5a434974d08db3f533dfe3dacced9fb6f2e64954b84a4b not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.156016 
5014 scope.go:117] "RemoveContainer" containerID="f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328" Feb 28 04:46:17 crc kubenswrapper[5014]: E0228 04:46:17.156493 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328\": container with ID starting with f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328 not found: ID does not exist" containerID="f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.156637 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328"} err="failed to get container status \"f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328\": rpc error: code = NotFound desc = could not find container \"f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328\": container with ID starting with f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328 not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.156733 5014 scope.go:117] "RemoveContainer" containerID="01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56" Feb 28 04:46:17 crc kubenswrapper[5014]: E0228 04:46:17.157112 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56\": container with ID starting with 01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56 not found: ID does not exist" containerID="01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.157140 5014 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56"} err="failed to get container status \"01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56\": rpc error: code = NotFound desc = could not find container \"01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56\": container with ID starting with 01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56 not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.157159 5014 scope.go:117] "RemoveContainer" containerID="3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331" Feb 28 04:46:17 crc kubenswrapper[5014]: E0228 04:46:17.157514 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331\": container with ID starting with 3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331 not found: ID does not exist" containerID="3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.157550 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331"} err="failed to get container status \"3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331\": rpc error: code = NotFound desc = could not find container \"3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331\": container with ID starting with 3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331 not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.157571 5014 scope.go:117] "RemoveContainer" containerID="4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c" Feb 28 04:46:17 crc kubenswrapper[5014]: E0228 04:46:17.157888 5014 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c\": container with ID starting with 4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c not found: ID does not exist" containerID="4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.157999 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c"} err="failed to get container status \"4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c\": rpc error: code = NotFound desc = could not find container \"4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c\": container with ID starting with 4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.158083 5014 scope.go:117] "RemoveContainer" containerID="2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63" Feb 28 04:46:17 crc kubenswrapper[5014]: E0228 04:46:17.158586 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63\": container with ID starting with 2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63 not found: ID does not exist" containerID="2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.158617 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63"} err="failed to get container status \"2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63\": rpc error: code = NotFound desc = could not find container 
\"2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63\": container with ID starting with 2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63 not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.158635 5014 scope.go:117] "RemoveContainer" containerID="b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013" Feb 28 04:46:17 crc kubenswrapper[5014]: E0228 04:46:17.158936 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013\": container with ID starting with b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013 not found: ID does not exist" containerID="b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.158971 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013"} err="failed to get container status \"b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013\": rpc error: code = NotFound desc = could not find container \"b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013\": container with ID starting with b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013 not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.158991 5014 scope.go:117] "RemoveContainer" containerID="6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388" Feb 28 04:46:17 crc kubenswrapper[5014]: E0228 04:46:17.159361 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388\": container with ID starting with 6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388 not found: ID does not exist" 
containerID="6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.159388 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388"} err="failed to get container status \"6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388\": rpc error: code = NotFound desc = could not find container \"6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388\": container with ID starting with 6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388 not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.159408 5014 scope.go:117] "RemoveContainer" containerID="6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c" Feb 28 04:46:17 crc kubenswrapper[5014]: E0228 04:46:17.159665 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c\": container with ID starting with 6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c not found: ID does not exist" containerID="6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.159760 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c"} err="failed to get container status \"6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c\": rpc error: code = NotFound desc = could not find container \"6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c\": container with ID starting with 6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.159879 5014 scope.go:117] 
"RemoveContainer" containerID="d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0" Feb 28 04:46:17 crc kubenswrapper[5014]: E0228 04:46:17.160284 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\": container with ID starting with d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0 not found: ID does not exist" containerID="d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.160377 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0"} err="failed to get container status \"d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\": rpc error: code = NotFound desc = could not find container \"d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\": container with ID starting with d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0 not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.160458 5014 scope.go:117] "RemoveContainer" containerID="b8168a4906c3c8c4ad5a434974d08db3f533dfe3dacced9fb6f2e64954b84a4b" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.160857 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8168a4906c3c8c4ad5a434974d08db3f533dfe3dacced9fb6f2e64954b84a4b"} err="failed to get container status \"b8168a4906c3c8c4ad5a434974d08db3f533dfe3dacced9fb6f2e64954b84a4b\": rpc error: code = NotFound desc = could not find container \"b8168a4906c3c8c4ad5a434974d08db3f533dfe3dacced9fb6f2e64954b84a4b\": container with ID starting with b8168a4906c3c8c4ad5a434974d08db3f533dfe3dacced9fb6f2e64954b84a4b not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.160971 5014 
scope.go:117] "RemoveContainer" containerID="f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.161400 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328"} err="failed to get container status \"f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328\": rpc error: code = NotFound desc = could not find container \"f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328\": container with ID starting with f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328 not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.161424 5014 scope.go:117] "RemoveContainer" containerID="01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.161668 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56"} err="failed to get container status \"01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56\": rpc error: code = NotFound desc = could not find container \"01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56\": container with ID starting with 01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56 not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.161690 5014 scope.go:117] "RemoveContainer" containerID="3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.162021 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331"} err="failed to get container status \"3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331\": rpc 
error: code = NotFound desc = could not find container \"3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331\": container with ID starting with 3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331 not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.162067 5014 scope.go:117] "RemoveContainer" containerID="4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.162509 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c"} err="failed to get container status \"4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c\": rpc error: code = NotFound desc = could not find container \"4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c\": container with ID starting with 4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.162534 5014 scope.go:117] "RemoveContainer" containerID="2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.162901 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63"} err="failed to get container status \"2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63\": rpc error: code = NotFound desc = could not find container \"2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63\": container with ID starting with 2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63 not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.162933 5014 scope.go:117] "RemoveContainer" containerID="b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013" Feb 28 04:46:17 crc 
kubenswrapper[5014]: I0228 04:46:17.163273 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013"} err="failed to get container status \"b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013\": rpc error: code = NotFound desc = could not find container \"b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013\": container with ID starting with b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013 not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.163296 5014 scope.go:117] "RemoveContainer" containerID="6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.163608 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388"} err="failed to get container status \"6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388\": rpc error: code = NotFound desc = could not find container \"6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388\": container with ID starting with 6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388 not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.163663 5014 scope.go:117] "RemoveContainer" containerID="6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.164026 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c"} err="failed to get container status \"6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c\": rpc error: code = NotFound desc = could not find container \"6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c\": container 
with ID starting with 6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.164050 5014 scope.go:117] "RemoveContainer" containerID="d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.164348 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0"} err="failed to get container status \"d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\": rpc error: code = NotFound desc = could not find container \"d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\": container with ID starting with d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0 not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.164459 5014 scope.go:117] "RemoveContainer" containerID="b8168a4906c3c8c4ad5a434974d08db3f533dfe3dacced9fb6f2e64954b84a4b" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.164938 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8168a4906c3c8c4ad5a434974d08db3f533dfe3dacced9fb6f2e64954b84a4b"} err="failed to get container status \"b8168a4906c3c8c4ad5a434974d08db3f533dfe3dacced9fb6f2e64954b84a4b\": rpc error: code = NotFound desc = could not find container \"b8168a4906c3c8c4ad5a434974d08db3f533dfe3dacced9fb6f2e64954b84a4b\": container with ID starting with b8168a4906c3c8c4ad5a434974d08db3f533dfe3dacced9fb6f2e64954b84a4b not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.165038 5014 scope.go:117] "RemoveContainer" containerID="f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.165437 5014 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328"} err="failed to get container status \"f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328\": rpc error: code = NotFound desc = could not find container \"f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328\": container with ID starting with f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328 not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.165480 5014 scope.go:117] "RemoveContainer" containerID="01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.165919 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56"} err="failed to get container status \"01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56\": rpc error: code = NotFound desc = could not find container \"01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56\": container with ID starting with 01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56 not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.166050 5014 scope.go:117] "RemoveContainer" containerID="3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.166396 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331"} err="failed to get container status \"3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331\": rpc error: code = NotFound desc = could not find container \"3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331\": container with ID starting with 3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331 not found: ID does not 
exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.166419 5014 scope.go:117] "RemoveContainer" containerID="4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.166723 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c"} err="failed to get container status \"4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c\": rpc error: code = NotFound desc = could not find container \"4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c\": container with ID starting with 4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.166751 5014 scope.go:117] "RemoveContainer" containerID="2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.167004 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63"} err="failed to get container status \"2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63\": rpc error: code = NotFound desc = could not find container \"2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63\": container with ID starting with 2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63 not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.167035 5014 scope.go:117] "RemoveContainer" containerID="b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.167315 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013"} err="failed to get container status 
\"b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013\": rpc error: code = NotFound desc = could not find container \"b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013\": container with ID starting with b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013 not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.167339 5014 scope.go:117] "RemoveContainer" containerID="6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.167592 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388"} err="failed to get container status \"6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388\": rpc error: code = NotFound desc = could not find container \"6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388\": container with ID starting with 6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388 not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.167684 5014 scope.go:117] "RemoveContainer" containerID="6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.168026 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c"} err="failed to get container status \"6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c\": rpc error: code = NotFound desc = could not find container \"6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c\": container with ID starting with 6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.168049 5014 scope.go:117] "RemoveContainer" 
containerID="d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.168294 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0"} err="failed to get container status \"d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\": rpc error: code = NotFound desc = could not find container \"d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\": container with ID starting with d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0 not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.168387 5014 scope.go:117] "RemoveContainer" containerID="b8168a4906c3c8c4ad5a434974d08db3f533dfe3dacced9fb6f2e64954b84a4b" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.168937 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8168a4906c3c8c4ad5a434974d08db3f533dfe3dacced9fb6f2e64954b84a4b"} err="failed to get container status \"b8168a4906c3c8c4ad5a434974d08db3f533dfe3dacced9fb6f2e64954b84a4b\": rpc error: code = NotFound desc = could not find container \"b8168a4906c3c8c4ad5a434974d08db3f533dfe3dacced9fb6f2e64954b84a4b\": container with ID starting with b8168a4906c3c8c4ad5a434974d08db3f533dfe3dacced9fb6f2e64954b84a4b not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.169006 5014 scope.go:117] "RemoveContainer" containerID="f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.169392 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328"} err="failed to get container status \"f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328\": rpc error: code = NotFound desc = could 
not find container \"f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328\": container with ID starting with f0e3fa55c0b8cf80281500a3e6df7913c53b9051924ef0bd58f17bea0f76f328 not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.169522 5014 scope.go:117] "RemoveContainer" containerID="01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.169881 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56"} err="failed to get container status \"01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56\": rpc error: code = NotFound desc = could not find container \"01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56\": container with ID starting with 01d180cdc81361c9480744b871b413f34f1d3ce2d66a0b1cc444a483d646ae56 not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.169918 5014 scope.go:117] "RemoveContainer" containerID="3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.170203 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331"} err="failed to get container status \"3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331\": rpc error: code = NotFound desc = could not find container \"3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331\": container with ID starting with 3fd5f51e58fcf65010f7ebe85742e182aa69d092ac08e7d8cc443247b0f19331 not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.170317 5014 scope.go:117] "RemoveContainer" containerID="4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 
04:46:17.170742 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c"} err="failed to get container status \"4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c\": rpc error: code = NotFound desc = could not find container \"4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c\": container with ID starting with 4d4b024ee0c4a2b9c1e7a9c739cbe8d2b15c0c602da3b50a532c590c84bd293c not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.170857 5014 scope.go:117] "RemoveContainer" containerID="2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.171239 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63"} err="failed to get container status \"2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63\": rpc error: code = NotFound desc = could not find container \"2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63\": container with ID starting with 2355236021b1cef5b9a2d25c87fe0b592d5500951ab3fadb7f10cb8d1a752c63 not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.171269 5014 scope.go:117] "RemoveContainer" containerID="b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.171653 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013"} err="failed to get container status \"b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013\": rpc error: code = NotFound desc = could not find container \"b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013\": container with ID starting with 
b86291f72945a2b1e3710510a8725600e068c8812be3ca63886922dfa56f2013 not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.171678 5014 scope.go:117] "RemoveContainer" containerID="6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.172003 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388"} err="failed to get container status \"6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388\": rpc error: code = NotFound desc = could not find container \"6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388\": container with ID starting with 6cca0597f2948ef8019aea1bb3a13c48665dc11d0efe132f4e23acd277722388 not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.172032 5014 scope.go:117] "RemoveContainer" containerID="6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.172362 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c"} err="failed to get container status \"6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c\": rpc error: code = NotFound desc = could not find container \"6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c\": container with ID starting with 6818702b1565bcfb734eb4b14a1a8aac52116029e0ed264fcf0cea3efe51327c not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.172457 5014 scope.go:117] "RemoveContainer" containerID="d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.172734 5014 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0"} err="failed to get container status \"d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\": rpc error: code = NotFound desc = could not find container \"d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0\": container with ID starting with d27b5bd60a30e4cd26d165f5aac220c2de1e71dd180ae1f887d79353da28e9f0 not found: ID does not exist" Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.492562 5014 generic.go:334] "Generic (PLEG): container finished" podID="75cfedbe-2f56-4160-b4bc-2349fdcb6bba" containerID="5ed19aa5c4e5e916212d120824241fe49cb193ca4d0f0f9bb4a433cd093bd304" exitCode=0 Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.492653 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" event={"ID":"75cfedbe-2f56-4160-b4bc-2349fdcb6bba","Type":"ContainerDied","Data":"5ed19aa5c4e5e916212d120824241fe49cb193ca4d0f0f9bb4a433cd093bd304"} Feb 28 04:46:17 crc kubenswrapper[5014]: I0228 04:46:17.495617 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8xzmq_08c35a73-dfa6-4097-beb4-3a6d4f419559/kube-multus/2.log" Feb 28 04:46:18 crc kubenswrapper[5014]: I0228 04:46:18.182084 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="faa5db1f-df50-492a-9d45-d5065bdc63d2" path="/var/lib/kubelet/pods/faa5db1f-df50-492a-9d45-d5065bdc63d2/volumes" Feb 28 04:46:18 crc kubenswrapper[5014]: I0228 04:46:18.505419 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" event={"ID":"75cfedbe-2f56-4160-b4bc-2349fdcb6bba","Type":"ContainerStarted","Data":"e23d9748296994af96f6d0c56832ecb58bf99cc2b62b9aee27294cdad46ed9f8"} Feb 28 04:46:18 crc kubenswrapper[5014]: I0228 04:46:18.505714 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" 
event={"ID":"75cfedbe-2f56-4160-b4bc-2349fdcb6bba","Type":"ContainerStarted","Data":"73330c860ae274412272393be052046dbc9acfe734258f992426e12db2b1c4c7"} Feb 28 04:46:18 crc kubenswrapper[5014]: I0228 04:46:18.505725 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" event={"ID":"75cfedbe-2f56-4160-b4bc-2349fdcb6bba","Type":"ContainerStarted","Data":"55dbc3c7c12e94ab2b211875158d564e07969b86ef9a10c7d590c4b7ace0c46e"} Feb 28 04:46:18 crc kubenswrapper[5014]: I0228 04:46:18.505733 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" event={"ID":"75cfedbe-2f56-4160-b4bc-2349fdcb6bba","Type":"ContainerStarted","Data":"78e7025639b4986771a8ff0bbec92709da9d341989007c82dda277f357991154"} Feb 28 04:46:18 crc kubenswrapper[5014]: I0228 04:46:18.505743 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" event={"ID":"75cfedbe-2f56-4160-b4bc-2349fdcb6bba","Type":"ContainerStarted","Data":"15b129de9b1ac3a750ca2b0ee66c4bb05a5a42e2dc1fae674d7f02d1ec729e86"} Feb 28 04:46:18 crc kubenswrapper[5014]: I0228 04:46:18.505751 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" event={"ID":"75cfedbe-2f56-4160-b4bc-2349fdcb6bba","Type":"ContainerStarted","Data":"02b22102b5f4b3339d1d4b10ed40402aab4cd906421a6ab3884e71277288f1af"} Feb 28 04:46:20 crc kubenswrapper[5014]: I0228 04:46:20.522205 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" event={"ID":"75cfedbe-2f56-4160-b4bc-2349fdcb6bba","Type":"ContainerStarted","Data":"07bb515ec755dba9856a83c2d9a07b53a02a41d5aae950a2fcfc44bc051e11da"} Feb 28 04:46:23 crc kubenswrapper[5014]: I0228 04:46:23.549534 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" 
event={"ID":"75cfedbe-2f56-4160-b4bc-2349fdcb6bba","Type":"ContainerStarted","Data":"a8cb91eb397ee18e0986352085bbe63f71757b582061223d295b56805da384e4"} Feb 28 04:46:23 crc kubenswrapper[5014]: I0228 04:46:23.550180 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:23 crc kubenswrapper[5014]: I0228 04:46:23.550202 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:23 crc kubenswrapper[5014]: I0228 04:46:23.550215 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:23 crc kubenswrapper[5014]: I0228 04:46:23.580505 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:23 crc kubenswrapper[5014]: I0228 04:46:23.583728 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:23 crc kubenswrapper[5014]: I0228 04:46:23.594502 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" podStartSLOduration=7.594473158 podStartE2EDuration="7.594473158s" podCreationTimestamp="2026-02-28 04:46:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:46:23.588064686 +0000 UTC m=+772.258190636" watchObservedRunningTime="2026-02-28 04:46:23.594473158 +0000 UTC m=+772.264599108" Feb 28 04:46:28 crc kubenswrapper[5014]: I0228 04:46:28.171654 5014 scope.go:117] "RemoveContainer" containerID="8ff78696065aad57b08b2613c61ae28962b3f9b9cd220106fba6bb3cf06b46a1" Feb 28 04:46:28 crc kubenswrapper[5014]: E0228 04:46:28.172382 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" 
with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-8xzmq_openshift-multus(08c35a73-dfa6-4097-beb4-3a6d4f419559)\"" pod="openshift-multus/multus-8xzmq" podUID="08c35a73-dfa6-4097-beb4-3a6d4f419559" Feb 28 04:46:32 crc kubenswrapper[5014]: I0228 04:46:32.691441 5014 scope.go:117] "RemoveContainer" containerID="f35d3e277d6f66460b6e9019dae8498ea93fff9c90babcf14830e0335f65c0b6" Feb 28 04:46:41 crc kubenswrapper[5014]: I0228 04:46:41.172435 5014 scope.go:117] "RemoveContainer" containerID="8ff78696065aad57b08b2613c61ae28962b3f9b9cd220106fba6bb3cf06b46a1" Feb 28 04:46:41 crc kubenswrapper[5014]: I0228 04:46:41.664407 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-8xzmq_08c35a73-dfa6-4097-beb4-3a6d4f419559/kube-multus/2.log" Feb 28 04:46:41 crc kubenswrapper[5014]: I0228 04:46:41.664980 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-8xzmq" event={"ID":"08c35a73-dfa6-4097-beb4-3a6d4f419559","Type":"ContainerStarted","Data":"c315177a94258fe1bc46aca82614c78b5a54947d8c2b90c305585e46c74a42de"} Feb 28 04:46:45 crc kubenswrapper[5014]: I0228 04:46:45.706570 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:46:45 crc kubenswrapper[5014]: I0228 04:46:45.708035 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:46:46 crc kubenswrapper[5014]: I0228 04:46:46.445052 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-gnmjb" Feb 28 04:46:51 crc kubenswrapper[5014]: I0228 04:46:51.213091 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh"] Feb 28 04:46:51 crc kubenswrapper[5014]: I0228 04:46:51.217120 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh" Feb 28 04:46:51 crc kubenswrapper[5014]: I0228 04:46:51.219681 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 28 04:46:51 crc kubenswrapper[5014]: I0228 04:46:51.222223 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh"] Feb 28 04:46:51 crc kubenswrapper[5014]: I0228 04:46:51.380216 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssx5k\" (UniqueName: \"kubernetes.io/projected/f0a92225-1e40-4c6b-af69-652221b1273a-kube-api-access-ssx5k\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh\" (UID: \"f0a92225-1e40-4c6b-af69-652221b1273a\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh" Feb 28 04:46:51 crc kubenswrapper[5014]: I0228 04:46:51.380262 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f0a92225-1e40-4c6b-af69-652221b1273a-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh\" (UID: \"f0a92225-1e40-4c6b-af69-652221b1273a\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh" Feb 28 04:46:51 crc kubenswrapper[5014]: I0228 04:46:51.380291 5014 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f0a92225-1e40-4c6b-af69-652221b1273a-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh\" (UID: \"f0a92225-1e40-4c6b-af69-652221b1273a\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh" Feb 28 04:46:51 crc kubenswrapper[5014]: I0228 04:46:51.482312 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssx5k\" (UniqueName: \"kubernetes.io/projected/f0a92225-1e40-4c6b-af69-652221b1273a-kube-api-access-ssx5k\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh\" (UID: \"f0a92225-1e40-4c6b-af69-652221b1273a\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh" Feb 28 04:46:51 crc kubenswrapper[5014]: I0228 04:46:51.482728 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f0a92225-1e40-4c6b-af69-652221b1273a-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh\" (UID: \"f0a92225-1e40-4c6b-af69-652221b1273a\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh" Feb 28 04:46:51 crc kubenswrapper[5014]: I0228 04:46:51.482938 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f0a92225-1e40-4c6b-af69-652221b1273a-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh\" (UID: \"f0a92225-1e40-4c6b-af69-652221b1273a\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh" Feb 28 04:46:51 crc kubenswrapper[5014]: I0228 04:46:51.483586 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f0a92225-1e40-4c6b-af69-652221b1273a-util\") pod 
\"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh\" (UID: \"f0a92225-1e40-4c6b-af69-652221b1273a\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh" Feb 28 04:46:51 crc kubenswrapper[5014]: I0228 04:46:51.483581 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f0a92225-1e40-4c6b-af69-652221b1273a-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh\" (UID: \"f0a92225-1e40-4c6b-af69-652221b1273a\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh" Feb 28 04:46:51 crc kubenswrapper[5014]: I0228 04:46:51.517481 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssx5k\" (UniqueName: \"kubernetes.io/projected/f0a92225-1e40-4c6b-af69-652221b1273a-kube-api-access-ssx5k\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh\" (UID: \"f0a92225-1e40-4c6b-af69-652221b1273a\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh" Feb 28 04:46:51 crc kubenswrapper[5014]: I0228 04:46:51.537697 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh" Feb 28 04:46:51 crc kubenswrapper[5014]: I0228 04:46:51.777139 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh"] Feb 28 04:46:52 crc kubenswrapper[5014]: I0228 04:46:52.733550 5014 generic.go:334] "Generic (PLEG): container finished" podID="f0a92225-1e40-4c6b-af69-652221b1273a" containerID="0b08850ec86fef94192954e653539c1bd5ef52786a0a2cce362fd9194b82d7f7" exitCode=0 Feb 28 04:46:52 crc kubenswrapper[5014]: I0228 04:46:52.733664 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh" event={"ID":"f0a92225-1e40-4c6b-af69-652221b1273a","Type":"ContainerDied","Data":"0b08850ec86fef94192954e653539c1bd5ef52786a0a2cce362fd9194b82d7f7"} Feb 28 04:46:52 crc kubenswrapper[5014]: I0228 04:46:52.734231 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh" event={"ID":"f0a92225-1e40-4c6b-af69-652221b1273a","Type":"ContainerStarted","Data":"bff22c1e206dcd36479a2c16da12dfd3f31e768c14eada974d0f30d3ab2eadc0"} Feb 28 04:46:55 crc kubenswrapper[5014]: I0228 04:46:55.765458 5014 generic.go:334] "Generic (PLEG): container finished" podID="f0a92225-1e40-4c6b-af69-652221b1273a" containerID="0f3a1f0c2bd2dece7891a51be28021e6dbf07f50c7a31193262254bbb10f9798" exitCode=0 Feb 28 04:46:55 crc kubenswrapper[5014]: I0228 04:46:55.765524 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh" event={"ID":"f0a92225-1e40-4c6b-af69-652221b1273a","Type":"ContainerDied","Data":"0f3a1f0c2bd2dece7891a51be28021e6dbf07f50c7a31193262254bbb10f9798"} Feb 28 04:46:56 crc kubenswrapper[5014]: I0228 04:46:56.776067 5014 
generic.go:334] "Generic (PLEG): container finished" podID="f0a92225-1e40-4c6b-af69-652221b1273a" containerID="e737f37d07dd5b381136533471a962c7fecc6fa3be44502efa2e0e2f575c21f8" exitCode=0 Feb 28 04:46:56 crc kubenswrapper[5014]: I0228 04:46:56.776127 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh" event={"ID":"f0a92225-1e40-4c6b-af69-652221b1273a","Type":"ContainerDied","Data":"e737f37d07dd5b381136533471a962c7fecc6fa3be44502efa2e0e2f575c21f8"} Feb 28 04:46:58 crc kubenswrapper[5014]: I0228 04:46:58.069120 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh" Feb 28 04:46:58 crc kubenswrapper[5014]: I0228 04:46:58.074269 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssx5k\" (UniqueName: \"kubernetes.io/projected/f0a92225-1e40-4c6b-af69-652221b1273a-kube-api-access-ssx5k\") pod \"f0a92225-1e40-4c6b-af69-652221b1273a\" (UID: \"f0a92225-1e40-4c6b-af69-652221b1273a\") " Feb 28 04:46:58 crc kubenswrapper[5014]: I0228 04:46:58.074318 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f0a92225-1e40-4c6b-af69-652221b1273a-util\") pod \"f0a92225-1e40-4c6b-af69-652221b1273a\" (UID: \"f0a92225-1e40-4c6b-af69-652221b1273a\") " Feb 28 04:46:58 crc kubenswrapper[5014]: I0228 04:46:58.074357 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f0a92225-1e40-4c6b-af69-652221b1273a-bundle\") pod \"f0a92225-1e40-4c6b-af69-652221b1273a\" (UID: \"f0a92225-1e40-4c6b-af69-652221b1273a\") " Feb 28 04:46:58 crc kubenswrapper[5014]: I0228 04:46:58.075889 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/f0a92225-1e40-4c6b-af69-652221b1273a-bundle" (OuterVolumeSpecName: "bundle") pod "f0a92225-1e40-4c6b-af69-652221b1273a" (UID: "f0a92225-1e40-4c6b-af69-652221b1273a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:46:58 crc kubenswrapper[5014]: I0228 04:46:58.082799 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0a92225-1e40-4c6b-af69-652221b1273a-kube-api-access-ssx5k" (OuterVolumeSpecName: "kube-api-access-ssx5k") pod "f0a92225-1e40-4c6b-af69-652221b1273a" (UID: "f0a92225-1e40-4c6b-af69-652221b1273a"). InnerVolumeSpecName "kube-api-access-ssx5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:46:58 crc kubenswrapper[5014]: I0228 04:46:58.175711 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssx5k\" (UniqueName: \"kubernetes.io/projected/f0a92225-1e40-4c6b-af69-652221b1273a-kube-api-access-ssx5k\") on node \"crc\" DevicePath \"\"" Feb 28 04:46:58 crc kubenswrapper[5014]: I0228 04:46:58.175752 5014 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f0a92225-1e40-4c6b-af69-652221b1273a-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:46:58 crc kubenswrapper[5014]: I0228 04:46:58.397252 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0a92225-1e40-4c6b-af69-652221b1273a-util" (OuterVolumeSpecName: "util") pod "f0a92225-1e40-4c6b-af69-652221b1273a" (UID: "f0a92225-1e40-4c6b-af69-652221b1273a"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:46:58 crc kubenswrapper[5014]: I0228 04:46:58.480387 5014 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f0a92225-1e40-4c6b-af69-652221b1273a-util\") on node \"crc\" DevicePath \"\"" Feb 28 04:46:58 crc kubenswrapper[5014]: I0228 04:46:58.792174 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh" event={"ID":"f0a92225-1e40-4c6b-af69-652221b1273a","Type":"ContainerDied","Data":"bff22c1e206dcd36479a2c16da12dfd3f31e768c14eada974d0f30d3ab2eadc0"} Feb 28 04:46:58 crc kubenswrapper[5014]: I0228 04:46:58.792229 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh" Feb 28 04:46:58 crc kubenswrapper[5014]: I0228 04:46:58.792243 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bff22c1e206dcd36479a2c16da12dfd3f31e768c14eada974d0f30d3ab2eadc0" Feb 28 04:47:02 crc kubenswrapper[5014]: I0228 04:47:02.610107 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-hdp54"] Feb 28 04:47:02 crc kubenswrapper[5014]: E0228 04:47:02.610586 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0a92225-1e40-4c6b-af69-652221b1273a" containerName="util" Feb 28 04:47:02 crc kubenswrapper[5014]: I0228 04:47:02.610598 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0a92225-1e40-4c6b-af69-652221b1273a" containerName="util" Feb 28 04:47:02 crc kubenswrapper[5014]: E0228 04:47:02.610607 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0a92225-1e40-4c6b-af69-652221b1273a" containerName="extract" Feb 28 04:47:02 crc kubenswrapper[5014]: I0228 04:47:02.610613 5014 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f0a92225-1e40-4c6b-af69-652221b1273a" containerName="extract" Feb 28 04:47:02 crc kubenswrapper[5014]: E0228 04:47:02.610627 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0a92225-1e40-4c6b-af69-652221b1273a" containerName="pull" Feb 28 04:47:02 crc kubenswrapper[5014]: I0228 04:47:02.610632 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0a92225-1e40-4c6b-af69-652221b1273a" containerName="pull" Feb 28 04:47:02 crc kubenswrapper[5014]: I0228 04:47:02.610716 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0a92225-1e40-4c6b-af69-652221b1273a" containerName="extract" Feb 28 04:47:02 crc kubenswrapper[5014]: I0228 04:47:02.611087 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-hdp54" Feb 28 04:47:02 crc kubenswrapper[5014]: I0228 04:47:02.613263 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 28 04:47:02 crc kubenswrapper[5014]: I0228 04:47:02.613355 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-fx9cd" Feb 28 04:47:02 crc kubenswrapper[5014]: I0228 04:47:02.614787 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 28 04:47:02 crc kubenswrapper[5014]: I0228 04:47:02.634687 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-hdp54"] Feb 28 04:47:02 crc kubenswrapper[5014]: I0228 04:47:02.738055 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwgv7\" (UniqueName: \"kubernetes.io/projected/1a5c4be4-d285-425e-bd4b-26cbf4d48b0e-kube-api-access-kwgv7\") pod \"nmstate-operator-75c5dccd6c-hdp54\" (UID: \"1a5c4be4-d285-425e-bd4b-26cbf4d48b0e\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-hdp54" Feb 28 
04:47:02 crc kubenswrapper[5014]: I0228 04:47:02.839167 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwgv7\" (UniqueName: \"kubernetes.io/projected/1a5c4be4-d285-425e-bd4b-26cbf4d48b0e-kube-api-access-kwgv7\") pod \"nmstate-operator-75c5dccd6c-hdp54\" (UID: \"1a5c4be4-d285-425e-bd4b-26cbf4d48b0e\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-hdp54" Feb 28 04:47:02 crc kubenswrapper[5014]: I0228 04:47:02.855899 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwgv7\" (UniqueName: \"kubernetes.io/projected/1a5c4be4-d285-425e-bd4b-26cbf4d48b0e-kube-api-access-kwgv7\") pod \"nmstate-operator-75c5dccd6c-hdp54\" (UID: \"1a5c4be4-d285-425e-bd4b-26cbf4d48b0e\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-hdp54" Feb 28 04:47:02 crc kubenswrapper[5014]: I0228 04:47:02.933852 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-hdp54" Feb 28 04:47:03 crc kubenswrapper[5014]: I0228 04:47:03.385771 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-hdp54"] Feb 28 04:47:03 crc kubenswrapper[5014]: I0228 04:47:03.833131 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-hdp54" event={"ID":"1a5c4be4-d285-425e-bd4b-26cbf4d48b0e","Type":"ContainerStarted","Data":"58286aab3de98cbc05832e12bbb3dcbfdaa89139632d1e1c1940416591f43114"} Feb 28 04:47:06 crc kubenswrapper[5014]: I0228 04:47:06.857732 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-hdp54" event={"ID":"1a5c4be4-d285-425e-bd4b-26cbf4d48b0e","Type":"ContainerStarted","Data":"b591f82fa7fffc69748b1db510cab9c9042f4a26510336cc25eed9fe7fcb9259"} Feb 28 04:47:06 crc kubenswrapper[5014]: I0228 04:47:06.885310 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-nmstate/nmstate-operator-75c5dccd6c-hdp54" podStartSLOduration=1.9895982920000002 podStartE2EDuration="4.885292442s" podCreationTimestamp="2026-02-28 04:47:02 +0000 UTC" firstStartedPulling="2026-02-28 04:47:03.392742584 +0000 UTC m=+812.062868494" lastFinishedPulling="2026-02-28 04:47:06.288436694 +0000 UTC m=+814.958562644" observedRunningTime="2026-02-28 04:47:06.884677806 +0000 UTC m=+815.554803756" watchObservedRunningTime="2026-02-28 04:47:06.885292442 +0000 UTC m=+815.555418342" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.347426 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-qktq9"] Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.349139 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-qktq9" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.355378 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-2rd2h" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.358317 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-qktq9"] Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.393504 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-lpxlh"] Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.394378 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-lpxlh" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.401134 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.416499 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-lpxlh"] Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.432656 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-qn5jv"] Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.433531 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-qn5jv" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.459774 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qkwc\" (UniqueName: \"kubernetes.io/projected/72580a24-d267-4917-955f-639fb9600a27-kube-api-access-6qkwc\") pod \"nmstate-metrics-69594cc75-qktq9\" (UID: \"72580a24-d267-4917-955f-639fb9600a27\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-qktq9" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.515732 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-9dtw6"] Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.516671 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-9dtw6" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.519082 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.519327 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-zlvpd" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.519493 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.524534 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-9dtw6"] Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.560910 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/057e43b5-a9ff-43d5-9f75-e9add271d1a6-nmstate-lock\") pod \"nmstate-handler-qn5jv\" (UID: \"057e43b5-a9ff-43d5-9f75-e9add271d1a6\") " pod="openshift-nmstate/nmstate-handler-qn5jv" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.560956 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/057e43b5-a9ff-43d5-9f75-e9add271d1a6-ovs-socket\") pod \"nmstate-handler-qn5jv\" (UID: \"057e43b5-a9ff-43d5-9f75-e9add271d1a6\") " pod="openshift-nmstate/nmstate-handler-qn5jv" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.561004 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7116c3b6-8ec4-42af-9739-9c4b1ea6e7c6-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-lpxlh\" (UID: \"7116c3b6-8ec4-42af-9739-9c4b1ea6e7c6\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-lpxlh" Feb 
28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.561024 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8xq4\" (UniqueName: \"kubernetes.io/projected/057e43b5-a9ff-43d5-9f75-e9add271d1a6-kube-api-access-r8xq4\") pod \"nmstate-handler-qn5jv\" (UID: \"057e43b5-a9ff-43d5-9f75-e9add271d1a6\") " pod="openshift-nmstate/nmstate-handler-qn5jv" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.561044 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/057e43b5-a9ff-43d5-9f75-e9add271d1a6-dbus-socket\") pod \"nmstate-handler-qn5jv\" (UID: \"057e43b5-a9ff-43d5-9f75-e9add271d1a6\") " pod="openshift-nmstate/nmstate-handler-qn5jv" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.561117 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/77ea3bfd-fad5-4789-8930-d7b7148453b2-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-9dtw6\" (UID: \"77ea3bfd-fad5-4789-8930-d7b7148453b2\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-9dtw6" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.561182 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qkwc\" (UniqueName: \"kubernetes.io/projected/72580a24-d267-4917-955f-639fb9600a27-kube-api-access-6qkwc\") pod \"nmstate-metrics-69594cc75-qktq9\" (UID: \"72580a24-d267-4917-955f-639fb9600a27\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-qktq9" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.561227 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jrwv\" (UniqueName: \"kubernetes.io/projected/7116c3b6-8ec4-42af-9739-9c4b1ea6e7c6-kube-api-access-6jrwv\") pod \"nmstate-webhook-786f45cff4-lpxlh\" (UID: 
\"7116c3b6-8ec4-42af-9739-9c4b1ea6e7c6\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-lpxlh" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.561260 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk46x\" (UniqueName: \"kubernetes.io/projected/77ea3bfd-fad5-4789-8930-d7b7148453b2-kube-api-access-fk46x\") pod \"nmstate-console-plugin-5dcbbd79cf-9dtw6\" (UID: \"77ea3bfd-fad5-4789-8930-d7b7148453b2\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-9dtw6" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.561291 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/77ea3bfd-fad5-4789-8930-d7b7148453b2-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-9dtw6\" (UID: \"77ea3bfd-fad5-4789-8930-d7b7148453b2\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-9dtw6" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.585870 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qkwc\" (UniqueName: \"kubernetes.io/projected/72580a24-d267-4917-955f-639fb9600a27-kube-api-access-6qkwc\") pod \"nmstate-metrics-69594cc75-qktq9\" (UID: \"72580a24-d267-4917-955f-639fb9600a27\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-qktq9" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.662873 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/77ea3bfd-fad5-4789-8930-d7b7148453b2-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-9dtw6\" (UID: \"77ea3bfd-fad5-4789-8930-d7b7148453b2\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-9dtw6" Feb 28 04:47:11 crc kubenswrapper[5014]: E0228 04:47:11.663037 5014 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret 
"plugin-serving-cert" not found Feb 28 04:47:11 crc kubenswrapper[5014]: E0228 04:47:11.663129 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77ea3bfd-fad5-4789-8930-d7b7148453b2-plugin-serving-cert podName:77ea3bfd-fad5-4789-8930-d7b7148453b2 nodeName:}" failed. No retries permitted until 2026-02-28 04:47:12.163104653 +0000 UTC m=+820.833230633 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/77ea3bfd-fad5-4789-8930-d7b7148453b2-plugin-serving-cert") pod "nmstate-console-plugin-5dcbbd79cf-9dtw6" (UID: "77ea3bfd-fad5-4789-8930-d7b7148453b2") : secret "plugin-serving-cert" not found Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.663058 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/057e43b5-a9ff-43d5-9f75-e9add271d1a6-ovs-socket\") pod \"nmstate-handler-qn5jv\" (UID: \"057e43b5-a9ff-43d5-9f75-e9add271d1a6\") " pod="openshift-nmstate/nmstate-handler-qn5jv" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.663132 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/057e43b5-a9ff-43d5-9f75-e9add271d1a6-ovs-socket\") pod \"nmstate-handler-qn5jv\" (UID: \"057e43b5-a9ff-43d5-9f75-e9add271d1a6\") " pod="openshift-nmstate/nmstate-handler-qn5jv" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.663189 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/057e43b5-a9ff-43d5-9f75-e9add271d1a6-nmstate-lock\") pod \"nmstate-handler-qn5jv\" (UID: \"057e43b5-a9ff-43d5-9f75-e9add271d1a6\") " pod="openshift-nmstate/nmstate-handler-qn5jv" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.663231 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/057e43b5-a9ff-43d5-9f75-e9add271d1a6-nmstate-lock\") pod \"nmstate-handler-qn5jv\" (UID: \"057e43b5-a9ff-43d5-9f75-e9add271d1a6\") " pod="openshift-nmstate/nmstate-handler-qn5jv" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.663377 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7116c3b6-8ec4-42af-9739-9c4b1ea6e7c6-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-lpxlh\" (UID: \"7116c3b6-8ec4-42af-9739-9c4b1ea6e7c6\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-lpxlh" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.663505 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8xq4\" (UniqueName: \"kubernetes.io/projected/057e43b5-a9ff-43d5-9f75-e9add271d1a6-kube-api-access-r8xq4\") pod \"nmstate-handler-qn5jv\" (UID: \"057e43b5-a9ff-43d5-9f75-e9add271d1a6\") " pod="openshift-nmstate/nmstate-handler-qn5jv" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.663602 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/057e43b5-a9ff-43d5-9f75-e9add271d1a6-dbus-socket\") pod \"nmstate-handler-qn5jv\" (UID: \"057e43b5-a9ff-43d5-9f75-e9add271d1a6\") " pod="openshift-nmstate/nmstate-handler-qn5jv" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.663720 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/77ea3bfd-fad5-4789-8930-d7b7148453b2-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-9dtw6\" (UID: \"77ea3bfd-fad5-4789-8930-d7b7148453b2\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-9dtw6" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.663877 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jrwv\" (UniqueName: 
\"kubernetes.io/projected/7116c3b6-8ec4-42af-9739-9c4b1ea6e7c6-kube-api-access-6jrwv\") pod \"nmstate-webhook-786f45cff4-lpxlh\" (UID: \"7116c3b6-8ec4-42af-9739-9c4b1ea6e7c6\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-lpxlh" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.663897 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/057e43b5-a9ff-43d5-9f75-e9add271d1a6-dbus-socket\") pod \"nmstate-handler-qn5jv\" (UID: \"057e43b5-a9ff-43d5-9f75-e9add271d1a6\") " pod="openshift-nmstate/nmstate-handler-qn5jv" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.663981 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fk46x\" (UniqueName: \"kubernetes.io/projected/77ea3bfd-fad5-4789-8930-d7b7148453b2-kube-api-access-fk46x\") pod \"nmstate-console-plugin-5dcbbd79cf-9dtw6\" (UID: \"77ea3bfd-fad5-4789-8930-d7b7148453b2\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-9dtw6" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.664648 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/77ea3bfd-fad5-4789-8930-d7b7148453b2-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-9dtw6\" (UID: \"77ea3bfd-fad5-4789-8930-d7b7148453b2\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-9dtw6" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.667862 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/7116c3b6-8ec4-42af-9739-9c4b1ea6e7c6-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-lpxlh\" (UID: \"7116c3b6-8ec4-42af-9739-9c4b1ea6e7c6\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-lpxlh" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.668074 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-qktq9" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.709974 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-884c8bf7b-5h2ml"] Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.710739 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-884c8bf7b-5h2ml" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.721600 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk46x\" (UniqueName: \"kubernetes.io/projected/77ea3bfd-fad5-4789-8930-d7b7148453b2-kube-api-access-fk46x\") pod \"nmstate-console-plugin-5dcbbd79cf-9dtw6\" (UID: \"77ea3bfd-fad5-4789-8930-d7b7148453b2\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-9dtw6" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.788297 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/005c4c70-5bc1-4b0c-bb6d-07f52e8b9321-service-ca\") pod \"console-884c8bf7b-5h2ml\" (UID: \"005c4c70-5bc1-4b0c-bb6d-07f52e8b9321\") " pod="openshift-console/console-884c8bf7b-5h2ml" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.788607 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/005c4c70-5bc1-4b0c-bb6d-07f52e8b9321-console-serving-cert\") pod \"console-884c8bf7b-5h2ml\" (UID: \"005c4c70-5bc1-4b0c-bb6d-07f52e8b9321\") " pod="openshift-console/console-884c8bf7b-5h2ml" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.788655 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dtqv\" (UniqueName: \"kubernetes.io/projected/005c4c70-5bc1-4b0c-bb6d-07f52e8b9321-kube-api-access-6dtqv\") pod \"console-884c8bf7b-5h2ml\" (UID: 
\"005c4c70-5bc1-4b0c-bb6d-07f52e8b9321\") " pod="openshift-console/console-884c8bf7b-5h2ml" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.788674 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/005c4c70-5bc1-4b0c-bb6d-07f52e8b9321-console-config\") pod \"console-884c8bf7b-5h2ml\" (UID: \"005c4c70-5bc1-4b0c-bb6d-07f52e8b9321\") " pod="openshift-console/console-884c8bf7b-5h2ml" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.788716 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/005c4c70-5bc1-4b0c-bb6d-07f52e8b9321-oauth-serving-cert\") pod \"console-884c8bf7b-5h2ml\" (UID: \"005c4c70-5bc1-4b0c-bb6d-07f52e8b9321\") " pod="openshift-console/console-884c8bf7b-5h2ml" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.788742 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/005c4c70-5bc1-4b0c-bb6d-07f52e8b9321-console-oauth-config\") pod \"console-884c8bf7b-5h2ml\" (UID: \"005c4c70-5bc1-4b0c-bb6d-07f52e8b9321\") " pod="openshift-console/console-884c8bf7b-5h2ml" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.788761 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/005c4c70-5bc1-4b0c-bb6d-07f52e8b9321-trusted-ca-bundle\") pod \"console-884c8bf7b-5h2ml\" (UID: \"005c4c70-5bc1-4b0c-bb6d-07f52e8b9321\") " pod="openshift-console/console-884c8bf7b-5h2ml" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.794778 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-884c8bf7b-5h2ml"] Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.813720 5014 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jrwv\" (UniqueName: \"kubernetes.io/projected/7116c3b6-8ec4-42af-9739-9c4b1ea6e7c6-kube-api-access-6jrwv\") pod \"nmstate-webhook-786f45cff4-lpxlh\" (UID: \"7116c3b6-8ec4-42af-9739-9c4b1ea6e7c6\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-lpxlh" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.816422 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8xq4\" (UniqueName: \"kubernetes.io/projected/057e43b5-a9ff-43d5-9f75-e9add271d1a6-kube-api-access-r8xq4\") pod \"nmstate-handler-qn5jv\" (UID: \"057e43b5-a9ff-43d5-9f75-e9add271d1a6\") " pod="openshift-nmstate/nmstate-handler-qn5jv" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.890234 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/005c4c70-5bc1-4b0c-bb6d-07f52e8b9321-oauth-serving-cert\") pod \"console-884c8bf7b-5h2ml\" (UID: \"005c4c70-5bc1-4b0c-bb6d-07f52e8b9321\") " pod="openshift-console/console-884c8bf7b-5h2ml" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.890271 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/005c4c70-5bc1-4b0c-bb6d-07f52e8b9321-console-oauth-config\") pod \"console-884c8bf7b-5h2ml\" (UID: \"005c4c70-5bc1-4b0c-bb6d-07f52e8b9321\") " pod="openshift-console/console-884c8bf7b-5h2ml" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.890287 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/005c4c70-5bc1-4b0c-bb6d-07f52e8b9321-trusted-ca-bundle\") pod \"console-884c8bf7b-5h2ml\" (UID: \"005c4c70-5bc1-4b0c-bb6d-07f52e8b9321\") " pod="openshift-console/console-884c8bf7b-5h2ml" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.890339 5014 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/005c4c70-5bc1-4b0c-bb6d-07f52e8b9321-service-ca\") pod \"console-884c8bf7b-5h2ml\" (UID: \"005c4c70-5bc1-4b0c-bb6d-07f52e8b9321\") " pod="openshift-console/console-884c8bf7b-5h2ml" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.890371 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/005c4c70-5bc1-4b0c-bb6d-07f52e8b9321-console-serving-cert\") pod \"console-884c8bf7b-5h2ml\" (UID: \"005c4c70-5bc1-4b0c-bb6d-07f52e8b9321\") " pod="openshift-console/console-884c8bf7b-5h2ml" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.890589 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-qktq9"] Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.890643 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dtqv\" (UniqueName: \"kubernetes.io/projected/005c4c70-5bc1-4b0c-bb6d-07f52e8b9321-kube-api-access-6dtqv\") pod \"console-884c8bf7b-5h2ml\" (UID: \"005c4c70-5bc1-4b0c-bb6d-07f52e8b9321\") " pod="openshift-console/console-884c8bf7b-5h2ml" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.890664 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/005c4c70-5bc1-4b0c-bb6d-07f52e8b9321-console-config\") pod \"console-884c8bf7b-5h2ml\" (UID: \"005c4c70-5bc1-4b0c-bb6d-07f52e8b9321\") " pod="openshift-console/console-884c8bf7b-5h2ml" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.891324 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/005c4c70-5bc1-4b0c-bb6d-07f52e8b9321-console-config\") pod \"console-884c8bf7b-5h2ml\" (UID: \"005c4c70-5bc1-4b0c-bb6d-07f52e8b9321\") 
" pod="openshift-console/console-884c8bf7b-5h2ml" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.893962 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/005c4c70-5bc1-4b0c-bb6d-07f52e8b9321-service-ca\") pod \"console-884c8bf7b-5h2ml\" (UID: \"005c4c70-5bc1-4b0c-bb6d-07f52e8b9321\") " pod="openshift-console/console-884c8bf7b-5h2ml" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.894241 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/005c4c70-5bc1-4b0c-bb6d-07f52e8b9321-oauth-serving-cert\") pod \"console-884c8bf7b-5h2ml\" (UID: \"005c4c70-5bc1-4b0c-bb6d-07f52e8b9321\") " pod="openshift-console/console-884c8bf7b-5h2ml" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.895006 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/005c4c70-5bc1-4b0c-bb6d-07f52e8b9321-console-serving-cert\") pod \"console-884c8bf7b-5h2ml\" (UID: \"005c4c70-5bc1-4b0c-bb6d-07f52e8b9321\") " pod="openshift-console/console-884c8bf7b-5h2ml" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.895007 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/005c4c70-5bc1-4b0c-bb6d-07f52e8b9321-console-oauth-config\") pod \"console-884c8bf7b-5h2ml\" (UID: \"005c4c70-5bc1-4b0c-bb6d-07f52e8b9321\") " pod="openshift-console/console-884c8bf7b-5h2ml" Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.895242 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/005c4c70-5bc1-4b0c-bb6d-07f52e8b9321-trusted-ca-bundle\") pod \"console-884c8bf7b-5h2ml\" (UID: \"005c4c70-5bc1-4b0c-bb6d-07f52e8b9321\") " pod="openshift-console/console-884c8bf7b-5h2ml" Feb 28 04:47:11 crc 
kubenswrapper[5014]: W0228 04:47:11.895601 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72580a24_d267_4917_955f_639fb9600a27.slice/crio-9c6a57676e32be91f35ece025e623485acaf7d0d09fdbe45a28d8cc088ec760a WatchSource:0}: Error finding container 9c6a57676e32be91f35ece025e623485acaf7d0d09fdbe45a28d8cc088ec760a: Status 404 returned error can't find the container with id 9c6a57676e32be91f35ece025e623485acaf7d0d09fdbe45a28d8cc088ec760a Feb 28 04:47:11 crc kubenswrapper[5014]: I0228 04:47:11.912482 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dtqv\" (UniqueName: \"kubernetes.io/projected/005c4c70-5bc1-4b0c-bb6d-07f52e8b9321-kube-api-access-6dtqv\") pod \"console-884c8bf7b-5h2ml\" (UID: \"005c4c70-5bc1-4b0c-bb6d-07f52e8b9321\") " pod="openshift-console/console-884c8bf7b-5h2ml" Feb 28 04:47:12 crc kubenswrapper[5014]: I0228 04:47:12.030537 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-lpxlh" Feb 28 04:47:12 crc kubenswrapper[5014]: I0228 04:47:12.054018 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-qn5jv" Feb 28 04:47:12 crc kubenswrapper[5014]: W0228 04:47:12.073294 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod057e43b5_a9ff_43d5_9f75_e9add271d1a6.slice/crio-0767cf009c795e7f4332aac8f0f6988c115581c1243224b643ceb05edd56616b WatchSource:0}: Error finding container 0767cf009c795e7f4332aac8f0f6988c115581c1243224b643ceb05edd56616b: Status 404 returned error can't find the container with id 0767cf009c795e7f4332aac8f0f6988c115581c1243224b643ceb05edd56616b Feb 28 04:47:12 crc kubenswrapper[5014]: I0228 04:47:12.123893 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-884c8bf7b-5h2ml" Feb 28 04:47:12 crc kubenswrapper[5014]: I0228 04:47:12.196789 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/77ea3bfd-fad5-4789-8930-d7b7148453b2-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-9dtw6\" (UID: \"77ea3bfd-fad5-4789-8930-d7b7148453b2\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-9dtw6" Feb 28 04:47:12 crc kubenswrapper[5014]: I0228 04:47:12.237667 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/77ea3bfd-fad5-4789-8930-d7b7148453b2-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-9dtw6\" (UID: \"77ea3bfd-fad5-4789-8930-d7b7148453b2\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-9dtw6" Feb 28 04:47:12 crc kubenswrapper[5014]: I0228 04:47:12.363108 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-884c8bf7b-5h2ml"] Feb 28 04:47:12 crc kubenswrapper[5014]: W0228 04:47:12.367323 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod005c4c70_5bc1_4b0c_bb6d_07f52e8b9321.slice/crio-1b5388b71da7f9ce1701a1e9f05b439297cd66c71d6bedeca6db38d342b3cb48 WatchSource:0}: Error finding container 1b5388b71da7f9ce1701a1e9f05b439297cd66c71d6bedeca6db38d342b3cb48: Status 404 returned error can't find the container with id 1b5388b71da7f9ce1701a1e9f05b439297cd66c71d6bedeca6db38d342b3cb48 Feb 28 04:47:12 crc kubenswrapper[5014]: I0228 04:47:12.434535 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-9dtw6" Feb 28 04:47:12 crc kubenswrapper[5014]: I0228 04:47:12.453048 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-lpxlh"] Feb 28 04:47:12 crc kubenswrapper[5014]: W0228 04:47:12.461792 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7116c3b6_8ec4_42af_9739_9c4b1ea6e7c6.slice/crio-d7799e0dab727c8d247066c369b2fba21e0256daaccde42ef33ea23aa2a13f27 WatchSource:0}: Error finding container d7799e0dab727c8d247066c369b2fba21e0256daaccde42ef33ea23aa2a13f27: Status 404 returned error can't find the container with id d7799e0dab727c8d247066c369b2fba21e0256daaccde42ef33ea23aa2a13f27 Feb 28 04:47:12 crc kubenswrapper[5014]: I0228 04:47:12.639385 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-9dtw6"] Feb 28 04:47:12 crc kubenswrapper[5014]: W0228 04:47:12.644290 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77ea3bfd_fad5_4789_8930_d7b7148453b2.slice/crio-94bbd09ede381207aa5fb32c5d0d4892d73825e5bcd950729a5a96e2a9f33a93 WatchSource:0}: Error finding container 94bbd09ede381207aa5fb32c5d0d4892d73825e5bcd950729a5a96e2a9f33a93: Status 404 returned error can't find the container with id 94bbd09ede381207aa5fb32c5d0d4892d73825e5bcd950729a5a96e2a9f33a93 Feb 28 04:47:12 crc kubenswrapper[5014]: I0228 04:47:12.910870 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-qktq9" event={"ID":"72580a24-d267-4917-955f-639fb9600a27","Type":"ContainerStarted","Data":"9c6a57676e32be91f35ece025e623485acaf7d0d09fdbe45a28d8cc088ec760a"} Feb 28 04:47:12 crc kubenswrapper[5014]: I0228 04:47:12.913207 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-console/console-884c8bf7b-5h2ml" event={"ID":"005c4c70-5bc1-4b0c-bb6d-07f52e8b9321","Type":"ContainerStarted","Data":"462fc72bcf568c58266b5a21063ff1f9d8375edc5f0b01887181278166db7104"} Feb 28 04:47:12 crc kubenswrapper[5014]: I0228 04:47:12.913265 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-884c8bf7b-5h2ml" event={"ID":"005c4c70-5bc1-4b0c-bb6d-07f52e8b9321","Type":"ContainerStarted","Data":"1b5388b71da7f9ce1701a1e9f05b439297cd66c71d6bedeca6db38d342b3cb48"} Feb 28 04:47:12 crc kubenswrapper[5014]: I0228 04:47:12.916637 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-qn5jv" event={"ID":"057e43b5-a9ff-43d5-9f75-e9add271d1a6","Type":"ContainerStarted","Data":"0767cf009c795e7f4332aac8f0f6988c115581c1243224b643ceb05edd56616b"} Feb 28 04:47:12 crc kubenswrapper[5014]: I0228 04:47:12.918341 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-lpxlh" event={"ID":"7116c3b6-8ec4-42af-9739-9c4b1ea6e7c6","Type":"ContainerStarted","Data":"d7799e0dab727c8d247066c369b2fba21e0256daaccde42ef33ea23aa2a13f27"} Feb 28 04:47:12 crc kubenswrapper[5014]: I0228 04:47:12.919689 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-9dtw6" event={"ID":"77ea3bfd-fad5-4789-8930-d7b7148453b2","Type":"ContainerStarted","Data":"94bbd09ede381207aa5fb32c5d0d4892d73825e5bcd950729a5a96e2a9f33a93"} Feb 28 04:47:12 crc kubenswrapper[5014]: I0228 04:47:12.934640 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-884c8bf7b-5h2ml" podStartSLOduration=1.934614966 podStartE2EDuration="1.934614966s" podCreationTimestamp="2026-02-28 04:47:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:47:12.932841548 +0000 UTC m=+821.602967498" 
watchObservedRunningTime="2026-02-28 04:47:12.934614966 +0000 UTC m=+821.604740916" Feb 28 04:47:15 crc kubenswrapper[5014]: I0228 04:47:15.707309 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:47:15 crc kubenswrapper[5014]: I0228 04:47:15.707950 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:47:18 crc kubenswrapper[5014]: I0228 04:47:18.966268 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-lpxlh" event={"ID":"7116c3b6-8ec4-42af-9739-9c4b1ea6e7c6","Type":"ContainerStarted","Data":"3a2530fc5f39e6b8b59f6010a3888135e119681edea53c65b13657c338229099"} Feb 28 04:47:18 crc kubenswrapper[5014]: I0228 04:47:18.966698 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-786f45cff4-lpxlh" Feb 28 04:47:18 crc kubenswrapper[5014]: I0228 04:47:18.969962 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-qktq9" event={"ID":"72580a24-d267-4917-955f-639fb9600a27","Type":"ContainerStarted","Data":"a2178a427d3b98b51d6dc4dcbb704597e06d1433c0dca2447f0059db8d8b8631"} Feb 28 04:47:18 crc kubenswrapper[5014]: I0228 04:47:18.973577 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-qn5jv" event={"ID":"057e43b5-a9ff-43d5-9f75-e9add271d1a6","Type":"ContainerStarted","Data":"002650a6ccf82f1ccdb4dc838d1632861593d16cecb0d1616cf258f93fb10d70"} Feb 28 
04:47:18 crc kubenswrapper[5014]: I0228 04:47:18.973847 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-qn5jv" Feb 28 04:47:18 crc kubenswrapper[5014]: I0228 04:47:18.987189 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-786f45cff4-lpxlh" podStartSLOduration=2.262393049 podStartE2EDuration="7.987173986s" podCreationTimestamp="2026-02-28 04:47:11 +0000 UTC" firstStartedPulling="2026-02-28 04:47:12.465181827 +0000 UTC m=+821.135307737" lastFinishedPulling="2026-02-28 04:47:18.189962764 +0000 UTC m=+826.860088674" observedRunningTime="2026-02-28 04:47:18.983317303 +0000 UTC m=+827.653443213" watchObservedRunningTime="2026-02-28 04:47:18.987173986 +0000 UTC m=+827.657299896" Feb 28 04:47:19 crc kubenswrapper[5014]: I0228 04:47:19.010037 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-qn5jv" podStartSLOduration=1.919435187 podStartE2EDuration="8.010012581s" podCreationTimestamp="2026-02-28 04:47:11 +0000 UTC" firstStartedPulling="2026-02-28 04:47:12.075499606 +0000 UTC m=+820.745625556" lastFinishedPulling="2026-02-28 04:47:18.16607703 +0000 UTC m=+826.836202950" observedRunningTime="2026-02-28 04:47:19.004580446 +0000 UTC m=+827.674706366" watchObservedRunningTime="2026-02-28 04:47:19.010012581 +0000 UTC m=+827.680138531" Feb 28 04:47:19 crc kubenswrapper[5014]: I0228 04:47:19.981398 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-9dtw6" event={"ID":"77ea3bfd-fad5-4789-8930-d7b7148453b2","Type":"ContainerStarted","Data":"ac034ca3bc5498b2da0c81c8488abe34e32ee21f86aca8c997badc12fa22b70b"} Feb 28 04:47:19 crc kubenswrapper[5014]: I0228 04:47:19.997469 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-9dtw6" podStartSLOduration=2.758159678 
podStartE2EDuration="8.997450096s" podCreationTimestamp="2026-02-28 04:47:11 +0000 UTC" firstStartedPulling="2026-02-28 04:47:12.646384706 +0000 UTC m=+821.316510616" lastFinishedPulling="2026-02-28 04:47:18.885675124 +0000 UTC m=+827.555801034" observedRunningTime="2026-02-28 04:47:19.995560215 +0000 UTC m=+828.665686125" watchObservedRunningTime="2026-02-28 04:47:19.997450096 +0000 UTC m=+828.667576006" Feb 28 04:47:22 crc kubenswrapper[5014]: I0228 04:47:22.038610 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-qktq9" event={"ID":"72580a24-d267-4917-955f-639fb9600a27","Type":"ContainerStarted","Data":"008528666ecee38db5361435ebcd5470c5643f495814ee6a20feff42e251c06e"} Feb 28 04:47:22 crc kubenswrapper[5014]: I0228 04:47:22.059180 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-69594cc75-qktq9" podStartSLOduration=1.8915744079999999 podStartE2EDuration="11.059160543s" podCreationTimestamp="2026-02-28 04:47:11 +0000 UTC" firstStartedPulling="2026-02-28 04:47:11.898442339 +0000 UTC m=+820.568568249" lastFinishedPulling="2026-02-28 04:47:21.066028474 +0000 UTC m=+829.736154384" observedRunningTime="2026-02-28 04:47:22.056932002 +0000 UTC m=+830.727057922" watchObservedRunningTime="2026-02-28 04:47:22.059160543 +0000 UTC m=+830.729286453" Feb 28 04:47:22 crc kubenswrapper[5014]: I0228 04:47:22.125106 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-884c8bf7b-5h2ml" Feb 28 04:47:22 crc kubenswrapper[5014]: I0228 04:47:22.125171 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-884c8bf7b-5h2ml" Feb 28 04:47:22 crc kubenswrapper[5014]: I0228 04:47:22.130793 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-884c8bf7b-5h2ml" Feb 28 04:47:23 crc kubenswrapper[5014]: I0228 04:47:23.057986 5014 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-884c8bf7b-5h2ml" Feb 28 04:47:23 crc kubenswrapper[5014]: I0228 04:47:23.113766 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-n8xpb"] Feb 28 04:47:27 crc kubenswrapper[5014]: I0228 04:47:27.083983 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-qn5jv" Feb 28 04:47:28 crc kubenswrapper[5014]: I0228 04:47:28.855493 5014 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 28 04:47:32 crc kubenswrapper[5014]: I0228 04:47:32.039749 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-786f45cff4-lpxlh" Feb 28 04:47:45 crc kubenswrapper[5014]: I0228 04:47:45.706658 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:47:45 crc kubenswrapper[5014]: I0228 04:47:45.707261 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:47:45 crc kubenswrapper[5014]: I0228 04:47:45.707312 5014 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cct62" Feb 28 04:47:45 crc kubenswrapper[5014]: I0228 04:47:45.707974 5014 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"3c623acb0fdab16e3036395527958cd8d0812619f2c3f18a285c60873b1031aa"} pod="openshift-machine-config-operator/machine-config-daemon-cct62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 04:47:45 crc kubenswrapper[5014]: I0228 04:47:45.708043 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" containerID="cri-o://3c623acb0fdab16e3036395527958cd8d0812619f2c3f18a285c60873b1031aa" gracePeriod=600 Feb 28 04:47:46 crc kubenswrapper[5014]: I0228 04:47:46.200873 5014 generic.go:334] "Generic (PLEG): container finished" podID="6aad0009-d904-48f8-8e30-82205907ece1" containerID="3c623acb0fdab16e3036395527958cd8d0812619f2c3f18a285c60873b1031aa" exitCode=0 Feb 28 04:47:46 crc kubenswrapper[5014]: I0228 04:47:46.200982 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" event={"ID":"6aad0009-d904-48f8-8e30-82205907ece1","Type":"ContainerDied","Data":"3c623acb0fdab16e3036395527958cd8d0812619f2c3f18a285c60873b1031aa"} Feb 28 04:47:46 crc kubenswrapper[5014]: I0228 04:47:46.201177 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" event={"ID":"6aad0009-d904-48f8-8e30-82205907ece1","Type":"ContainerStarted","Data":"cf1c2df486dbe48ee5a602ed54854b395ec2709d14f3810f6a23ce669b21c259"} Feb 28 04:47:46 crc kubenswrapper[5014]: I0228 04:47:46.201201 5014 scope.go:117] "RemoveContainer" containerID="76173414ea4b12400d46550fd1e95f3e073ea3c531dbe0f494a8f9363fcd0372" Feb 28 04:47:46 crc kubenswrapper[5014]: I0228 04:47:46.782249 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s"] Feb 28 04:47:46 crc 
kubenswrapper[5014]: I0228 04:47:46.785102 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s" Feb 28 04:47:46 crc kubenswrapper[5014]: I0228 04:47:46.788605 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 28 04:47:46 crc kubenswrapper[5014]: I0228 04:47:46.792107 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s"] Feb 28 04:47:46 crc kubenswrapper[5014]: I0228 04:47:46.928464 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44x42\" (UniqueName: \"kubernetes.io/projected/37c25bf9-a707-42db-9488-1cd660e44edc-kube-api-access-44x42\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s\" (UID: \"37c25bf9-a707-42db-9488-1cd660e44edc\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s" Feb 28 04:47:46 crc kubenswrapper[5014]: I0228 04:47:46.928626 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/37c25bf9-a707-42db-9488-1cd660e44edc-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s\" (UID: \"37c25bf9-a707-42db-9488-1cd660e44edc\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s" Feb 28 04:47:46 crc kubenswrapper[5014]: I0228 04:47:46.929057 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/37c25bf9-a707-42db-9488-1cd660e44edc-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s\" (UID: \"37c25bf9-a707-42db-9488-1cd660e44edc\") " 
pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s" Feb 28 04:47:47 crc kubenswrapper[5014]: I0228 04:47:47.029471 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/37c25bf9-a707-42db-9488-1cd660e44edc-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s\" (UID: \"37c25bf9-a707-42db-9488-1cd660e44edc\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s" Feb 28 04:47:47 crc kubenswrapper[5014]: I0228 04:47:47.029539 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44x42\" (UniqueName: \"kubernetes.io/projected/37c25bf9-a707-42db-9488-1cd660e44edc-kube-api-access-44x42\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s\" (UID: \"37c25bf9-a707-42db-9488-1cd660e44edc\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s" Feb 28 04:47:47 crc kubenswrapper[5014]: I0228 04:47:47.029582 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/37c25bf9-a707-42db-9488-1cd660e44edc-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s\" (UID: \"37c25bf9-a707-42db-9488-1cd660e44edc\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s" Feb 28 04:47:47 crc kubenswrapper[5014]: I0228 04:47:47.030097 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/37c25bf9-a707-42db-9488-1cd660e44edc-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s\" (UID: \"37c25bf9-a707-42db-9488-1cd660e44edc\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s" Feb 28 04:47:47 crc kubenswrapper[5014]: I0228 04:47:47.031096 5014 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/37c25bf9-a707-42db-9488-1cd660e44edc-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s\" (UID: \"37c25bf9-a707-42db-9488-1cd660e44edc\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s" Feb 28 04:47:47 crc kubenswrapper[5014]: I0228 04:47:47.047206 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44x42\" (UniqueName: \"kubernetes.io/projected/37c25bf9-a707-42db-9488-1cd660e44edc-kube-api-access-44x42\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s\" (UID: \"37c25bf9-a707-42db-9488-1cd660e44edc\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s" Feb 28 04:47:47 crc kubenswrapper[5014]: I0228 04:47:47.106363 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s" Feb 28 04:47:47 crc kubenswrapper[5014]: I0228 04:47:47.317071 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s"] Feb 28 04:47:48 crc kubenswrapper[5014]: I0228 04:47:48.171754 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-n8xpb" podUID="8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b" containerName="console" containerID="cri-o://75a4c1bf46fae910a90bf0ac7440a9495f9a393b6dd20925567b7333f18b10cc" gracePeriod=15 Feb 28 04:47:48 crc kubenswrapper[5014]: I0228 04:47:48.231283 5014 generic.go:334] "Generic (PLEG): container finished" podID="37c25bf9-a707-42db-9488-1cd660e44edc" containerID="c07f842d2256bea154ed1c4ad54d1449897cf335d1bcbddbc01023cbbcaf8c07" exitCode=0 Feb 28 04:47:48 crc kubenswrapper[5014]: I0228 04:47:48.231342 5014 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s" event={"ID":"37c25bf9-a707-42db-9488-1cd660e44edc","Type":"ContainerDied","Data":"c07f842d2256bea154ed1c4ad54d1449897cf335d1bcbddbc01023cbbcaf8c07"} Feb 28 04:47:48 crc kubenswrapper[5014]: I0228 04:47:48.231755 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s" event={"ID":"37c25bf9-a707-42db-9488-1cd660e44edc","Type":"ContainerStarted","Data":"29825dbc6e78a77a852c6b231f577ed43e267dbcca6019e140af3ba3b74f5160"} Feb 28 04:47:48 crc kubenswrapper[5014]: I0228 04:47:48.559064 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-n8xpb_8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b/console/0.log" Feb 28 04:47:48 crc kubenswrapper[5014]: I0228 04:47:48.559157 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-n8xpb" Feb 28 04:47:48 crc kubenswrapper[5014]: I0228 04:47:48.751730 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-console-config\") pod \"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b\" (UID: \"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b\") " Feb 28 04:47:48 crc kubenswrapper[5014]: I0228 04:47:48.751771 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-service-ca\") pod \"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b\" (UID: \"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b\") " Feb 28 04:47:48 crc kubenswrapper[5014]: I0228 04:47:48.751793 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-trusted-ca-bundle\") pod \"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b\" (UID: \"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b\") " Feb 28 04:47:48 crc kubenswrapper[5014]: I0228 04:47:48.751840 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-console-oauth-config\") pod \"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b\" (UID: \"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b\") " Feb 28 04:47:48 crc kubenswrapper[5014]: I0228 04:47:48.751864 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-oauth-serving-cert\") pod \"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b\" (UID: \"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b\") " Feb 28 04:47:48 crc kubenswrapper[5014]: I0228 04:47:48.751897 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c64w8\" (UniqueName: \"kubernetes.io/projected/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-kube-api-access-c64w8\") pod \"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b\" (UID: \"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b\") " Feb 28 04:47:48 crc kubenswrapper[5014]: I0228 04:47:48.751931 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-console-serving-cert\") pod \"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b\" (UID: \"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b\") " Feb 28 04:47:48 crc kubenswrapper[5014]: I0228 04:47:48.753086 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b" (UID: "8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b"). 
InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:47:48 crc kubenswrapper[5014]: I0228 04:47:48.753140 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-console-config" (OuterVolumeSpecName: "console-config") pod "8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b" (UID: "8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:47:48 crc kubenswrapper[5014]: I0228 04:47:48.753401 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b" (UID: "8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:47:48 crc kubenswrapper[5014]: I0228 04:47:48.753703 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-service-ca" (OuterVolumeSpecName: "service-ca") pod "8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b" (UID: "8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:47:48 crc kubenswrapper[5014]: I0228 04:47:48.759104 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-kube-api-access-c64w8" (OuterVolumeSpecName: "kube-api-access-c64w8") pod "8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b" (UID: "8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b"). InnerVolumeSpecName "kube-api-access-c64w8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:47:48 crc kubenswrapper[5014]: I0228 04:47:48.759216 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b" (UID: "8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:47:48 crc kubenswrapper[5014]: I0228 04:47:48.759580 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b" (UID: "8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:47:48 crc kubenswrapper[5014]: I0228 04:47:48.853133 5014 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-console-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:47:48 crc kubenswrapper[5014]: I0228 04:47:48.853171 5014 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-service-ca\") on node \"crc\" DevicePath \"\"" Feb 28 04:47:48 crc kubenswrapper[5014]: I0228 04:47:48.853180 5014 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:47:48 crc kubenswrapper[5014]: I0228 04:47:48.853188 5014 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:47:48 crc kubenswrapper[5014]: I0228 04:47:48.853198 5014 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 04:47:48 crc kubenswrapper[5014]: I0228 04:47:48.853206 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c64w8\" (UniqueName: \"kubernetes.io/projected/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-kube-api-access-c64w8\") on node \"crc\" DevicePath \"\"" Feb 28 04:47:48 crc kubenswrapper[5014]: I0228 04:47:48.853215 5014 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 28 04:47:49 crc kubenswrapper[5014]: I0228 04:47:49.129458 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m8vmv"] Feb 28 04:47:49 crc kubenswrapper[5014]: E0228 04:47:49.129742 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b" containerName="console" Feb 28 04:47:49 crc kubenswrapper[5014]: I0228 04:47:49.129756 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b" containerName="console" Feb 28 04:47:49 crc kubenswrapper[5014]: I0228 04:47:49.129925 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b" containerName="console" Feb 28 04:47:49 crc kubenswrapper[5014]: I0228 04:47:49.130825 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m8vmv" Feb 28 04:47:49 crc kubenswrapper[5014]: I0228 04:47:49.146221 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m8vmv"] Feb 28 04:47:49 crc kubenswrapper[5014]: I0228 04:47:49.248254 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-n8xpb_8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b/console/0.log" Feb 28 04:47:49 crc kubenswrapper[5014]: I0228 04:47:49.248300 5014 generic.go:334] "Generic (PLEG): container finished" podID="8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b" containerID="75a4c1bf46fae910a90bf0ac7440a9495f9a393b6dd20925567b7333f18b10cc" exitCode=2 Feb 28 04:47:49 crc kubenswrapper[5014]: I0228 04:47:49.248328 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-n8xpb" event={"ID":"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b","Type":"ContainerDied","Data":"75a4c1bf46fae910a90bf0ac7440a9495f9a393b6dd20925567b7333f18b10cc"} Feb 28 04:47:49 crc kubenswrapper[5014]: I0228 04:47:49.248354 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-n8xpb" event={"ID":"8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b","Type":"ContainerDied","Data":"7359ff212343f50eb0de16116ac02f48a69c02db1421cde9de4b0ce72b35f3e7"} Feb 28 04:47:49 crc kubenswrapper[5014]: I0228 04:47:49.248370 5014 scope.go:117] "RemoveContainer" containerID="75a4c1bf46fae910a90bf0ac7440a9495f9a393b6dd20925567b7333f18b10cc" Feb 28 04:47:49 crc kubenswrapper[5014]: I0228 04:47:49.248434 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-n8xpb" Feb 28 04:47:49 crc kubenswrapper[5014]: I0228 04:47:49.263315 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d6bb27d-3ad7-49b1-8a17-2ca709952079-catalog-content\") pod \"redhat-operators-m8vmv\" (UID: \"0d6bb27d-3ad7-49b1-8a17-2ca709952079\") " pod="openshift-marketplace/redhat-operators-m8vmv" Feb 28 04:47:49 crc kubenswrapper[5014]: I0228 04:47:49.263371 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d6bb27d-3ad7-49b1-8a17-2ca709952079-utilities\") pod \"redhat-operators-m8vmv\" (UID: \"0d6bb27d-3ad7-49b1-8a17-2ca709952079\") " pod="openshift-marketplace/redhat-operators-m8vmv" Feb 28 04:47:49 crc kubenswrapper[5014]: I0228 04:47:49.263401 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftqd5\" (UniqueName: \"kubernetes.io/projected/0d6bb27d-3ad7-49b1-8a17-2ca709952079-kube-api-access-ftqd5\") pod \"redhat-operators-m8vmv\" (UID: \"0d6bb27d-3ad7-49b1-8a17-2ca709952079\") " pod="openshift-marketplace/redhat-operators-m8vmv" Feb 28 04:47:49 crc kubenswrapper[5014]: I0228 04:47:49.268784 5014 scope.go:117] "RemoveContainer" containerID="75a4c1bf46fae910a90bf0ac7440a9495f9a393b6dd20925567b7333f18b10cc" Feb 28 04:47:49 crc kubenswrapper[5014]: E0228 04:47:49.269873 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75a4c1bf46fae910a90bf0ac7440a9495f9a393b6dd20925567b7333f18b10cc\": container with ID starting with 75a4c1bf46fae910a90bf0ac7440a9495f9a393b6dd20925567b7333f18b10cc not found: ID does not exist" containerID="75a4c1bf46fae910a90bf0ac7440a9495f9a393b6dd20925567b7333f18b10cc" Feb 28 04:47:49 crc kubenswrapper[5014]: I0228 
04:47:49.269916 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75a4c1bf46fae910a90bf0ac7440a9495f9a393b6dd20925567b7333f18b10cc"} err="failed to get container status \"75a4c1bf46fae910a90bf0ac7440a9495f9a393b6dd20925567b7333f18b10cc\": rpc error: code = NotFound desc = could not find container \"75a4c1bf46fae910a90bf0ac7440a9495f9a393b6dd20925567b7333f18b10cc\": container with ID starting with 75a4c1bf46fae910a90bf0ac7440a9495f9a393b6dd20925567b7333f18b10cc not found: ID does not exist" Feb 28 04:47:49 crc kubenswrapper[5014]: I0228 04:47:49.280319 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-n8xpb"] Feb 28 04:47:49 crc kubenswrapper[5014]: I0228 04:47:49.285639 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-n8xpb"] Feb 28 04:47:49 crc kubenswrapper[5014]: I0228 04:47:49.365105 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d6bb27d-3ad7-49b1-8a17-2ca709952079-catalog-content\") pod \"redhat-operators-m8vmv\" (UID: \"0d6bb27d-3ad7-49b1-8a17-2ca709952079\") " pod="openshift-marketplace/redhat-operators-m8vmv" Feb 28 04:47:49 crc kubenswrapper[5014]: I0228 04:47:49.365195 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d6bb27d-3ad7-49b1-8a17-2ca709952079-utilities\") pod \"redhat-operators-m8vmv\" (UID: \"0d6bb27d-3ad7-49b1-8a17-2ca709952079\") " pod="openshift-marketplace/redhat-operators-m8vmv" Feb 28 04:47:49 crc kubenswrapper[5014]: I0228 04:47:49.365227 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftqd5\" (UniqueName: \"kubernetes.io/projected/0d6bb27d-3ad7-49b1-8a17-2ca709952079-kube-api-access-ftqd5\") pod \"redhat-operators-m8vmv\" (UID: \"0d6bb27d-3ad7-49b1-8a17-2ca709952079\") 
" pod="openshift-marketplace/redhat-operators-m8vmv" Feb 28 04:47:49 crc kubenswrapper[5014]: I0228 04:47:49.365671 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d6bb27d-3ad7-49b1-8a17-2ca709952079-catalog-content\") pod \"redhat-operators-m8vmv\" (UID: \"0d6bb27d-3ad7-49b1-8a17-2ca709952079\") " pod="openshift-marketplace/redhat-operators-m8vmv" Feb 28 04:47:49 crc kubenswrapper[5014]: I0228 04:47:49.366073 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d6bb27d-3ad7-49b1-8a17-2ca709952079-utilities\") pod \"redhat-operators-m8vmv\" (UID: \"0d6bb27d-3ad7-49b1-8a17-2ca709952079\") " pod="openshift-marketplace/redhat-operators-m8vmv" Feb 28 04:47:49 crc kubenswrapper[5014]: I0228 04:47:49.381067 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftqd5\" (UniqueName: \"kubernetes.io/projected/0d6bb27d-3ad7-49b1-8a17-2ca709952079-kube-api-access-ftqd5\") pod \"redhat-operators-m8vmv\" (UID: \"0d6bb27d-3ad7-49b1-8a17-2ca709952079\") " pod="openshift-marketplace/redhat-operators-m8vmv" Feb 28 04:47:49 crc kubenswrapper[5014]: I0228 04:47:49.449020 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m8vmv" Feb 28 04:47:49 crc kubenswrapper[5014]: I0228 04:47:49.839519 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m8vmv"] Feb 28 04:47:50 crc kubenswrapper[5014]: I0228 04:47:50.179630 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b" path="/var/lib/kubelet/pods/8c2b26b3-46ec-438c-8fa4-ffb3c5a9e66b/volumes" Feb 28 04:47:50 crc kubenswrapper[5014]: I0228 04:47:50.257602 5014 generic.go:334] "Generic (PLEG): container finished" podID="0d6bb27d-3ad7-49b1-8a17-2ca709952079" containerID="e132d4106646d071c24a939b4132e2142c413514fedf6f5bb09dcaed77454e1e" exitCode=0 Feb 28 04:47:50 crc kubenswrapper[5014]: I0228 04:47:50.257644 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8vmv" event={"ID":"0d6bb27d-3ad7-49b1-8a17-2ca709952079","Type":"ContainerDied","Data":"e132d4106646d071c24a939b4132e2142c413514fedf6f5bb09dcaed77454e1e"} Feb 28 04:47:50 crc kubenswrapper[5014]: I0228 04:47:50.257682 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8vmv" event={"ID":"0d6bb27d-3ad7-49b1-8a17-2ca709952079","Type":"ContainerStarted","Data":"4c9f7ec94f45ea2cd698e5dffe1588afd56f5e8223c3e4f5a6651a3817b2bafb"} Feb 28 04:47:50 crc kubenswrapper[5014]: I0228 04:47:50.260287 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s" event={"ID":"37c25bf9-a707-42db-9488-1cd660e44edc","Type":"ContainerStarted","Data":"ff3b309c3f4e1fb6e11c1d65cf63cc00a058e82b43c1b48590ba52e3a3a46e0d"} Feb 28 04:47:51 crc kubenswrapper[5014]: I0228 04:47:51.270001 5014 generic.go:334] "Generic (PLEG): container finished" podID="37c25bf9-a707-42db-9488-1cd660e44edc" containerID="ff3b309c3f4e1fb6e11c1d65cf63cc00a058e82b43c1b48590ba52e3a3a46e0d" 
exitCode=0 Feb 28 04:47:51 crc kubenswrapper[5014]: I0228 04:47:51.270072 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s" event={"ID":"37c25bf9-a707-42db-9488-1cd660e44edc","Type":"ContainerDied","Data":"ff3b309c3f4e1fb6e11c1d65cf63cc00a058e82b43c1b48590ba52e3a3a46e0d"} Feb 28 04:47:52 crc kubenswrapper[5014]: I0228 04:47:52.277574 5014 generic.go:334] "Generic (PLEG): container finished" podID="0d6bb27d-3ad7-49b1-8a17-2ca709952079" containerID="d91672c33f9e7404ee70ada9828fc03397541a9593752f14b3dfa9ab8c92f7bb" exitCode=0 Feb 28 04:47:52 crc kubenswrapper[5014]: I0228 04:47:52.277856 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8vmv" event={"ID":"0d6bb27d-3ad7-49b1-8a17-2ca709952079","Type":"ContainerDied","Data":"d91672c33f9e7404ee70ada9828fc03397541a9593752f14b3dfa9ab8c92f7bb"} Feb 28 04:47:52 crc kubenswrapper[5014]: I0228 04:47:52.286485 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s" event={"ID":"37c25bf9-a707-42db-9488-1cd660e44edc","Type":"ContainerDied","Data":"98eff422529b7326524240f425b9255fd5977472af790ba9c7e1229c390f5119"} Feb 28 04:47:52 crc kubenswrapper[5014]: I0228 04:47:52.286564 5014 generic.go:334] "Generic (PLEG): container finished" podID="37c25bf9-a707-42db-9488-1cd660e44edc" containerID="98eff422529b7326524240f425b9255fd5977472af790ba9c7e1229c390f5119" exitCode=0 Feb 28 04:47:53 crc kubenswrapper[5014]: I0228 04:47:53.296526 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8vmv" event={"ID":"0d6bb27d-3ad7-49b1-8a17-2ca709952079","Type":"ContainerStarted","Data":"583ecc7199e06c9ac9e8ca52566c45c19b73acc5e067f3362aa5046eb5ff2da3"} Feb 28 04:47:53 crc kubenswrapper[5014]: I0228 04:47:53.334783 5014 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-marketplace/redhat-operators-m8vmv" podStartSLOduration=1.785344941 podStartE2EDuration="4.334756089s" podCreationTimestamp="2026-02-28 04:47:49 +0000 UTC" firstStartedPulling="2026-02-28 04:47:50.259222356 +0000 UTC m=+858.929348266" lastFinishedPulling="2026-02-28 04:47:52.808633494 +0000 UTC m=+861.478759414" observedRunningTime="2026-02-28 04:47:53.328993214 +0000 UTC m=+861.999119134" watchObservedRunningTime="2026-02-28 04:47:53.334756089 +0000 UTC m=+862.004882009" Feb 28 04:47:53 crc kubenswrapper[5014]: I0228 04:47:53.578425 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s" Feb 28 04:47:53 crc kubenswrapper[5014]: I0228 04:47:53.723343 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44x42\" (UniqueName: \"kubernetes.io/projected/37c25bf9-a707-42db-9488-1cd660e44edc-kube-api-access-44x42\") pod \"37c25bf9-a707-42db-9488-1cd660e44edc\" (UID: \"37c25bf9-a707-42db-9488-1cd660e44edc\") " Feb 28 04:47:53 crc kubenswrapper[5014]: I0228 04:47:53.723608 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/37c25bf9-a707-42db-9488-1cd660e44edc-bundle\") pod \"37c25bf9-a707-42db-9488-1cd660e44edc\" (UID: \"37c25bf9-a707-42db-9488-1cd660e44edc\") " Feb 28 04:47:53 crc kubenswrapper[5014]: I0228 04:47:53.723849 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/37c25bf9-a707-42db-9488-1cd660e44edc-util\") pod \"37c25bf9-a707-42db-9488-1cd660e44edc\" (UID: \"37c25bf9-a707-42db-9488-1cd660e44edc\") " Feb 28 04:47:53 crc kubenswrapper[5014]: I0228 04:47:53.724781 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37c25bf9-a707-42db-9488-1cd660e44edc-bundle" 
(OuterVolumeSpecName: "bundle") pod "37c25bf9-a707-42db-9488-1cd660e44edc" (UID: "37c25bf9-a707-42db-9488-1cd660e44edc"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:47:53 crc kubenswrapper[5014]: I0228 04:47:53.741100 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37c25bf9-a707-42db-9488-1cd660e44edc-kube-api-access-44x42" (OuterVolumeSpecName: "kube-api-access-44x42") pod "37c25bf9-a707-42db-9488-1cd660e44edc" (UID: "37c25bf9-a707-42db-9488-1cd660e44edc"). InnerVolumeSpecName "kube-api-access-44x42". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:47:53 crc kubenswrapper[5014]: I0228 04:47:53.745530 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37c25bf9-a707-42db-9488-1cd660e44edc-util" (OuterVolumeSpecName: "util") pod "37c25bf9-a707-42db-9488-1cd660e44edc" (UID: "37c25bf9-a707-42db-9488-1cd660e44edc"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:47:53 crc kubenswrapper[5014]: I0228 04:47:53.825435 5014 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/37c25bf9-a707-42db-9488-1cd660e44edc-util\") on node \"crc\" DevicePath \"\"" Feb 28 04:47:53 crc kubenswrapper[5014]: I0228 04:47:53.825719 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44x42\" (UniqueName: \"kubernetes.io/projected/37c25bf9-a707-42db-9488-1cd660e44edc-kube-api-access-44x42\") on node \"crc\" DevicePath \"\"" Feb 28 04:47:53 crc kubenswrapper[5014]: I0228 04:47:53.825780 5014 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/37c25bf9-a707-42db-9488-1cd660e44edc-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:47:54 crc kubenswrapper[5014]: I0228 04:47:54.309203 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s" event={"ID":"37c25bf9-a707-42db-9488-1cd660e44edc","Type":"ContainerDied","Data":"29825dbc6e78a77a852c6b231f577ed43e267dbcca6019e140af3ba3b74f5160"} Feb 28 04:47:54 crc kubenswrapper[5014]: I0228 04:47:54.309264 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29825dbc6e78a77a852c6b231f577ed43e267dbcca6019e140af3ba3b74f5160" Feb 28 04:47:54 crc kubenswrapper[5014]: I0228 04:47:54.309304 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s" Feb 28 04:47:59 crc kubenswrapper[5014]: I0228 04:47:59.449678 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m8vmv" Feb 28 04:47:59 crc kubenswrapper[5014]: I0228 04:47:59.450082 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m8vmv" Feb 28 04:48:00 crc kubenswrapper[5014]: I0228 04:48:00.121015 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537568-f9lhv"] Feb 28 04:48:00 crc kubenswrapper[5014]: E0228 04:48:00.121287 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37c25bf9-a707-42db-9488-1cd660e44edc" containerName="util" Feb 28 04:48:00 crc kubenswrapper[5014]: I0228 04:48:00.121313 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="37c25bf9-a707-42db-9488-1cd660e44edc" containerName="util" Feb 28 04:48:00 crc kubenswrapper[5014]: E0228 04:48:00.121325 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37c25bf9-a707-42db-9488-1cd660e44edc" containerName="extract" Feb 28 04:48:00 crc kubenswrapper[5014]: I0228 04:48:00.121334 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="37c25bf9-a707-42db-9488-1cd660e44edc" containerName="extract" Feb 28 04:48:00 crc kubenswrapper[5014]: E0228 04:48:00.121344 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37c25bf9-a707-42db-9488-1cd660e44edc" containerName="pull" Feb 28 04:48:00 crc kubenswrapper[5014]: I0228 04:48:00.121351 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="37c25bf9-a707-42db-9488-1cd660e44edc" containerName="pull" Feb 28 04:48:00 crc kubenswrapper[5014]: I0228 04:48:00.121480 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="37c25bf9-a707-42db-9488-1cd660e44edc" containerName="extract" Feb 28 
04:48:00 crc kubenswrapper[5014]: I0228 04:48:00.121975 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537568-f9lhv" Feb 28 04:48:00 crc kubenswrapper[5014]: I0228 04:48:00.123739 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 04:48:00 crc kubenswrapper[5014]: I0228 04:48:00.123901 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 04:48:00 crc kubenswrapper[5014]: I0228 04:48:00.125995 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-r6pdk" Feb 28 04:48:00 crc kubenswrapper[5014]: I0228 04:48:00.130553 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537568-f9lhv"] Feb 28 04:48:00 crc kubenswrapper[5014]: I0228 04:48:00.284537 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb5bv\" (UniqueName: \"kubernetes.io/projected/337102fc-d918-4401-a98b-0903531566b9-kube-api-access-pb5bv\") pod \"auto-csr-approver-29537568-f9lhv\" (UID: \"337102fc-d918-4401-a98b-0903531566b9\") " pod="openshift-infra/auto-csr-approver-29537568-f9lhv" Feb 28 04:48:00 crc kubenswrapper[5014]: I0228 04:48:00.386415 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pb5bv\" (UniqueName: \"kubernetes.io/projected/337102fc-d918-4401-a98b-0903531566b9-kube-api-access-pb5bv\") pod \"auto-csr-approver-29537568-f9lhv\" (UID: \"337102fc-d918-4401-a98b-0903531566b9\") " pod="openshift-infra/auto-csr-approver-29537568-f9lhv" Feb 28 04:48:00 crc kubenswrapper[5014]: I0228 04:48:00.407040 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pb5bv\" (UniqueName: 
\"kubernetes.io/projected/337102fc-d918-4401-a98b-0903531566b9-kube-api-access-pb5bv\") pod \"auto-csr-approver-29537568-f9lhv\" (UID: \"337102fc-d918-4401-a98b-0903531566b9\") " pod="openshift-infra/auto-csr-approver-29537568-f9lhv" Feb 28 04:48:00 crc kubenswrapper[5014]: I0228 04:48:00.444738 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537568-f9lhv" Feb 28 04:48:00 crc kubenswrapper[5014]: I0228 04:48:00.489298 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m8vmv" podUID="0d6bb27d-3ad7-49b1-8a17-2ca709952079" containerName="registry-server" probeResult="failure" output=< Feb 28 04:48:00 crc kubenswrapper[5014]: timeout: failed to connect service ":50051" within 1s Feb 28 04:48:00 crc kubenswrapper[5014]: > Feb 28 04:48:00 crc kubenswrapper[5014]: I0228 04:48:00.791334 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537568-f9lhv"] Feb 28 04:48:01 crc kubenswrapper[5014]: I0228 04:48:01.347151 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537568-f9lhv" event={"ID":"337102fc-d918-4401-a98b-0903531566b9","Type":"ContainerStarted","Data":"8e04b07e809860c7f53eda4dcd91a4ad4971588fdfe06cb3955e1be8c2df62c5"} Feb 28 04:48:02 crc kubenswrapper[5014]: I0228 04:48:02.997699 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-c97d79cb8-9k7r6"] Feb 28 04:48:02 crc kubenswrapper[5014]: I0228 04:48:02.998638 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-c97d79cb8-9k7r6" Feb 28 04:48:03 crc kubenswrapper[5014]: I0228 04:48:03.001922 5014 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 28 04:48:03 crc kubenswrapper[5014]: I0228 04:48:03.002106 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 28 04:48:03 crc kubenswrapper[5014]: I0228 04:48:03.002224 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 28 04:48:03 crc kubenswrapper[5014]: I0228 04:48:03.002659 5014 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 28 04:48:03 crc kubenswrapper[5014]: I0228 04:48:03.002788 5014 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-pppqf" Feb 28 04:48:03 crc kubenswrapper[5014]: I0228 04:48:03.025032 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-c97d79cb8-9k7r6"] Feb 28 04:48:03 crc kubenswrapper[5014]: I0228 04:48:03.124724 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7765e634-9939-4dca-82bc-847db81c81e4-webhook-cert\") pod \"metallb-operator-controller-manager-c97d79cb8-9k7r6\" (UID: \"7765e634-9939-4dca-82bc-847db81c81e4\") " pod="metallb-system/metallb-operator-controller-manager-c97d79cb8-9k7r6" Feb 28 04:48:03 crc kubenswrapper[5014]: I0228 04:48:03.124790 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7t9w\" (UniqueName: \"kubernetes.io/projected/7765e634-9939-4dca-82bc-847db81c81e4-kube-api-access-f7t9w\") pod 
\"metallb-operator-controller-manager-c97d79cb8-9k7r6\" (UID: \"7765e634-9939-4dca-82bc-847db81c81e4\") " pod="metallb-system/metallb-operator-controller-manager-c97d79cb8-9k7r6" Feb 28 04:48:03 crc kubenswrapper[5014]: I0228 04:48:03.124898 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7765e634-9939-4dca-82bc-847db81c81e4-apiservice-cert\") pod \"metallb-operator-controller-manager-c97d79cb8-9k7r6\" (UID: \"7765e634-9939-4dca-82bc-847db81c81e4\") " pod="metallb-system/metallb-operator-controller-manager-c97d79cb8-9k7r6" Feb 28 04:48:03 crc kubenswrapper[5014]: I0228 04:48:03.226002 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7765e634-9939-4dca-82bc-847db81c81e4-webhook-cert\") pod \"metallb-operator-controller-manager-c97d79cb8-9k7r6\" (UID: \"7765e634-9939-4dca-82bc-847db81c81e4\") " pod="metallb-system/metallb-operator-controller-manager-c97d79cb8-9k7r6" Feb 28 04:48:03 crc kubenswrapper[5014]: I0228 04:48:03.226353 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7t9w\" (UniqueName: \"kubernetes.io/projected/7765e634-9939-4dca-82bc-847db81c81e4-kube-api-access-f7t9w\") pod \"metallb-operator-controller-manager-c97d79cb8-9k7r6\" (UID: \"7765e634-9939-4dca-82bc-847db81c81e4\") " pod="metallb-system/metallb-operator-controller-manager-c97d79cb8-9k7r6" Feb 28 04:48:03 crc kubenswrapper[5014]: I0228 04:48:03.226968 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7765e634-9939-4dca-82bc-847db81c81e4-apiservice-cert\") pod \"metallb-operator-controller-manager-c97d79cb8-9k7r6\" (UID: \"7765e634-9939-4dca-82bc-847db81c81e4\") " pod="metallb-system/metallb-operator-controller-manager-c97d79cb8-9k7r6" Feb 28 04:48:03 crc 
kubenswrapper[5014]: I0228 04:48:03.236056 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7765e634-9939-4dca-82bc-847db81c81e4-apiservice-cert\") pod \"metallb-operator-controller-manager-c97d79cb8-9k7r6\" (UID: \"7765e634-9939-4dca-82bc-847db81c81e4\") " pod="metallb-system/metallb-operator-controller-manager-c97d79cb8-9k7r6" Feb 28 04:48:03 crc kubenswrapper[5014]: I0228 04:48:03.238675 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7765e634-9939-4dca-82bc-847db81c81e4-webhook-cert\") pod \"metallb-operator-controller-manager-c97d79cb8-9k7r6\" (UID: \"7765e634-9939-4dca-82bc-847db81c81e4\") " pod="metallb-system/metallb-operator-controller-manager-c97d79cb8-9k7r6" Feb 28 04:48:03 crc kubenswrapper[5014]: I0228 04:48:03.257642 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7t9w\" (UniqueName: \"kubernetes.io/projected/7765e634-9939-4dca-82bc-847db81c81e4-kube-api-access-f7t9w\") pod \"metallb-operator-controller-manager-c97d79cb8-9k7r6\" (UID: \"7765e634-9939-4dca-82bc-847db81c81e4\") " pod="metallb-system/metallb-operator-controller-manager-c97d79cb8-9k7r6" Feb 28 04:48:03 crc kubenswrapper[5014]: I0228 04:48:03.365388 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537568-f9lhv" event={"ID":"337102fc-d918-4401-a98b-0903531566b9","Type":"ContainerStarted","Data":"5f1a677503627726f8500aa93d09bc9493c95a61fd1567c361904b444c215213"} Feb 28 04:48:03 crc kubenswrapper[5014]: I0228 04:48:03.380117 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537568-f9lhv" podStartSLOduration=1.304983925 podStartE2EDuration="3.380100785s" podCreationTimestamp="2026-02-28 04:48:00 +0000 UTC" firstStartedPulling="2026-02-28 04:48:00.799952119 +0000 UTC m=+869.470078039" 
lastFinishedPulling="2026-02-28 04:48:02.875068989 +0000 UTC m=+871.545194899" observedRunningTime="2026-02-28 04:48:03.37843222 +0000 UTC m=+872.048558140" watchObservedRunningTime="2026-02-28 04:48:03.380100785 +0000 UTC m=+872.050226695"
Feb 28 04:48:03 crc kubenswrapper[5014]: I0228 04:48:03.424886 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-c97d79cb8-9k7r6"
Feb 28 04:48:03 crc kubenswrapper[5014]: I0228 04:48:03.568093 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-75b5fcbdc5-txj9m"]
Feb 28 04:48:03 crc kubenswrapper[5014]: I0228 04:48:03.569010 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-75b5fcbdc5-txj9m"
Feb 28 04:48:03 crc kubenswrapper[5014]: I0228 04:48:03.573477 5014 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-5vbjg"
Feb 28 04:48:03 crc kubenswrapper[5014]: I0228 04:48:03.573764 5014 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Feb 28 04:48:03 crc kubenswrapper[5014]: I0228 04:48:03.580381 5014 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 28 04:48:03 crc kubenswrapper[5014]: I0228 04:48:03.588627 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-75b5fcbdc5-txj9m"]
Feb 28 04:48:03 crc kubenswrapper[5014]: I0228 04:48:03.732846 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fec123b5-34af-438f-8a38-306d3484b235-webhook-cert\") pod \"metallb-operator-webhook-server-75b5fcbdc5-txj9m\" (UID: \"fec123b5-34af-438f-8a38-306d3484b235\") " pod="metallb-system/metallb-operator-webhook-server-75b5fcbdc5-txj9m"
Feb 28 04:48:03 crc kubenswrapper[5014]: I0228 04:48:03.733246 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd8nv\" (UniqueName: \"kubernetes.io/projected/fec123b5-34af-438f-8a38-306d3484b235-kube-api-access-wd8nv\") pod \"metallb-operator-webhook-server-75b5fcbdc5-txj9m\" (UID: \"fec123b5-34af-438f-8a38-306d3484b235\") " pod="metallb-system/metallb-operator-webhook-server-75b5fcbdc5-txj9m"
Feb 28 04:48:03 crc kubenswrapper[5014]: I0228 04:48:03.733350 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fec123b5-34af-438f-8a38-306d3484b235-apiservice-cert\") pod \"metallb-operator-webhook-server-75b5fcbdc5-txj9m\" (UID: \"fec123b5-34af-438f-8a38-306d3484b235\") " pod="metallb-system/metallb-operator-webhook-server-75b5fcbdc5-txj9m"
Feb 28 04:48:03 crc kubenswrapper[5014]: I0228 04:48:03.835611 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fec123b5-34af-438f-8a38-306d3484b235-webhook-cert\") pod \"metallb-operator-webhook-server-75b5fcbdc5-txj9m\" (UID: \"fec123b5-34af-438f-8a38-306d3484b235\") " pod="metallb-system/metallb-operator-webhook-server-75b5fcbdc5-txj9m"
Feb 28 04:48:03 crc kubenswrapper[5014]: I0228 04:48:03.835680 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd8nv\" (UniqueName: \"kubernetes.io/projected/fec123b5-34af-438f-8a38-306d3484b235-kube-api-access-wd8nv\") pod \"metallb-operator-webhook-server-75b5fcbdc5-txj9m\" (UID: \"fec123b5-34af-438f-8a38-306d3484b235\") " pod="metallb-system/metallb-operator-webhook-server-75b5fcbdc5-txj9m"
Feb 28 04:48:03 crc kubenswrapper[5014]: I0228 04:48:03.835727 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fec123b5-34af-438f-8a38-306d3484b235-apiservice-cert\") pod \"metallb-operator-webhook-server-75b5fcbdc5-txj9m\" (UID: \"fec123b5-34af-438f-8a38-306d3484b235\") " pod="metallb-system/metallb-operator-webhook-server-75b5fcbdc5-txj9m"
Feb 28 04:48:03 crc kubenswrapper[5014]: I0228 04:48:03.839231 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fec123b5-34af-438f-8a38-306d3484b235-webhook-cert\") pod \"metallb-operator-webhook-server-75b5fcbdc5-txj9m\" (UID: \"fec123b5-34af-438f-8a38-306d3484b235\") " pod="metallb-system/metallb-operator-webhook-server-75b5fcbdc5-txj9m"
Feb 28 04:48:03 crc kubenswrapper[5014]: I0228 04:48:03.840336 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fec123b5-34af-438f-8a38-306d3484b235-apiservice-cert\") pod \"metallb-operator-webhook-server-75b5fcbdc5-txj9m\" (UID: \"fec123b5-34af-438f-8a38-306d3484b235\") " pod="metallb-system/metallb-operator-webhook-server-75b5fcbdc5-txj9m"
Feb 28 04:48:03 crc kubenswrapper[5014]: I0228 04:48:03.852604 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd8nv\" (UniqueName: \"kubernetes.io/projected/fec123b5-34af-438f-8a38-306d3484b235-kube-api-access-wd8nv\") pod \"metallb-operator-webhook-server-75b5fcbdc5-txj9m\" (UID: \"fec123b5-34af-438f-8a38-306d3484b235\") " pod="metallb-system/metallb-operator-webhook-server-75b5fcbdc5-txj9m"
Feb 28 04:48:03 crc kubenswrapper[5014]: I0228 04:48:03.885597 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-75b5fcbdc5-txj9m"
Feb 28 04:48:03 crc kubenswrapper[5014]: I0228 04:48:03.903671 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-c97d79cb8-9k7r6"]
Feb 28 04:48:03 crc kubenswrapper[5014]: W0228 04:48:03.916454 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7765e634_9939_4dca_82bc_847db81c81e4.slice/crio-70ff2471d1015f8c29e57a7fd5b7db2dc1531d1da31413f7fd261bba07375671 WatchSource:0}: Error finding container 70ff2471d1015f8c29e57a7fd5b7db2dc1531d1da31413f7fd261bba07375671: Status 404 returned error can't find the container with id 70ff2471d1015f8c29e57a7fd5b7db2dc1531d1da31413f7fd261bba07375671
Feb 28 04:48:04 crc kubenswrapper[5014]: I0228 04:48:04.113296 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-75b5fcbdc5-txj9m"]
Feb 28 04:48:04 crc kubenswrapper[5014]: I0228 04:48:04.372405 5014 generic.go:334] "Generic (PLEG): container finished" podID="337102fc-d918-4401-a98b-0903531566b9" containerID="5f1a677503627726f8500aa93d09bc9493c95a61fd1567c361904b444c215213" exitCode=0
Feb 28 04:48:04 crc kubenswrapper[5014]: I0228 04:48:04.372498 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537568-f9lhv" event={"ID":"337102fc-d918-4401-a98b-0903531566b9","Type":"ContainerDied","Data":"5f1a677503627726f8500aa93d09bc9493c95a61fd1567c361904b444c215213"}
Feb 28 04:48:04 crc kubenswrapper[5014]: I0228 04:48:04.375449 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-c97d79cb8-9k7r6" event={"ID":"7765e634-9939-4dca-82bc-847db81c81e4","Type":"ContainerStarted","Data":"70ff2471d1015f8c29e57a7fd5b7db2dc1531d1da31413f7fd261bba07375671"}
Feb 28 04:48:04 crc kubenswrapper[5014]: I0228 04:48:04.376862 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-75b5fcbdc5-txj9m" event={"ID":"fec123b5-34af-438f-8a38-306d3484b235","Type":"ContainerStarted","Data":"cb4afb35558bbb695ae0c7261aa265c005f6adcc98017d155b3ea9a6dbcded81"}
Feb 28 04:48:05 crc kubenswrapper[5014]: I0228 04:48:05.642884 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537568-f9lhv"
Feb 28 04:48:05 crc kubenswrapper[5014]: I0228 04:48:05.761249 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pb5bv\" (UniqueName: \"kubernetes.io/projected/337102fc-d918-4401-a98b-0903531566b9-kube-api-access-pb5bv\") pod \"337102fc-d918-4401-a98b-0903531566b9\" (UID: \"337102fc-d918-4401-a98b-0903531566b9\") "
Feb 28 04:48:05 crc kubenswrapper[5014]: I0228 04:48:05.766623 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/337102fc-d918-4401-a98b-0903531566b9-kube-api-access-pb5bv" (OuterVolumeSpecName: "kube-api-access-pb5bv") pod "337102fc-d918-4401-a98b-0903531566b9" (UID: "337102fc-d918-4401-a98b-0903531566b9"). InnerVolumeSpecName "kube-api-access-pb5bv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 04:48:05 crc kubenswrapper[5014]: I0228 04:48:05.862618 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pb5bv\" (UniqueName: \"kubernetes.io/projected/337102fc-d918-4401-a98b-0903531566b9-kube-api-access-pb5bv\") on node \"crc\" DevicePath \"\""
Feb 28 04:48:06 crc kubenswrapper[5014]: I0228 04:48:06.394083 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537568-f9lhv" event={"ID":"337102fc-d918-4401-a98b-0903531566b9","Type":"ContainerDied","Data":"8e04b07e809860c7f53eda4dcd91a4ad4971588fdfe06cb3955e1be8c2df62c5"}
Feb 28 04:48:06 crc kubenswrapper[5014]: I0228 04:48:06.394334 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e04b07e809860c7f53eda4dcd91a4ad4971588fdfe06cb3955e1be8c2df62c5"
Feb 28 04:48:06 crc kubenswrapper[5014]: I0228 04:48:06.394175 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537568-f9lhv"
Feb 28 04:48:06 crc kubenswrapper[5014]: I0228 04:48:06.441107 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537562-gm7z8"]
Feb 28 04:48:06 crc kubenswrapper[5014]: I0228 04:48:06.448626 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537562-gm7z8"]
Feb 28 04:48:08 crc kubenswrapper[5014]: I0228 04:48:08.183503 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e29226d0-8e4d-4cd1-9353-d7b85b709a7c" path="/var/lib/kubelet/pods/e29226d0-8e4d-4cd1-9353-d7b85b709a7c/volumes"
Feb 28 04:48:09 crc kubenswrapper[5014]: I0228 04:48:09.502031 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m8vmv"
Feb 28 04:48:09 crc kubenswrapper[5014]: I0228 04:48:09.545621 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m8vmv"
Feb 28 04:48:09 crc kubenswrapper[5014]: I0228 04:48:09.730078 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m8vmv"]
Feb 28 04:48:11 crc kubenswrapper[5014]: I0228 04:48:11.422524 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-75b5fcbdc5-txj9m" event={"ID":"fec123b5-34af-438f-8a38-306d3484b235","Type":"ContainerStarted","Data":"c7586ec4bf4d1230f82156a8fd649e4445ae8e66bdb971b95037b426706a1de4"}
Feb 28 04:48:11 crc kubenswrapper[5014]: I0228 04:48:11.422954 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-75b5fcbdc5-txj9m"
Feb 28 04:48:11 crc kubenswrapper[5014]: I0228 04:48:11.424667 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-c97d79cb8-9k7r6" event={"ID":"7765e634-9939-4dca-82bc-847db81c81e4","Type":"ContainerStarted","Data":"dea673e79fe4699efcac5cf8165b2b2a910f1f1251a874819fe2507b268a83fb"}
Feb 28 04:48:11 crc kubenswrapper[5014]: I0228 04:48:11.424721 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-c97d79cb8-9k7r6"
Feb 28 04:48:11 crc kubenswrapper[5014]: I0228 04:48:11.424916 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m8vmv" podUID="0d6bb27d-3ad7-49b1-8a17-2ca709952079" containerName="registry-server" containerID="cri-o://583ecc7199e06c9ac9e8ca52566c45c19b73acc5e067f3362aa5046eb5ff2da3" gracePeriod=2
Feb 28 04:48:11 crc kubenswrapper[5014]: I0228 04:48:11.441100 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-75b5fcbdc5-txj9m" podStartSLOduration=1.5217315120000001 podStartE2EDuration="8.441082052s" podCreationTimestamp="2026-02-28 04:48:03 +0000 UTC" firstStartedPulling="2026-02-28 04:48:04.133367225 +0000 UTC m=+872.803493135" lastFinishedPulling="2026-02-28 04:48:11.052717765 +0000 UTC m=+879.722843675" observedRunningTime="2026-02-28 04:48:11.440258559 +0000 UTC m=+880.110384469" watchObservedRunningTime="2026-02-28 04:48:11.441082052 +0000 UTC m=+880.111207962"
Feb 28 04:48:11 crc kubenswrapper[5014]: I0228 04:48:11.475603 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-c97d79cb8-9k7r6" podStartSLOduration=2.349028639 podStartE2EDuration="9.475582763s" podCreationTimestamp="2026-02-28 04:48:02 +0000 UTC" firstStartedPulling="2026-02-28 04:48:03.919227913 +0000 UTC m=+872.589353823" lastFinishedPulling="2026-02-28 04:48:11.045782037 +0000 UTC m=+879.715907947" observedRunningTime="2026-02-28 04:48:11.469732255 +0000 UTC m=+880.139858215" watchObservedRunningTime="2026-02-28 04:48:11.475582763 +0000 UTC m=+880.145708673"
Feb 28 04:48:11 crc kubenswrapper[5014]: I0228 04:48:11.819537 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m8vmv"
Feb 28 04:48:11 crc kubenswrapper[5014]: I0228 04:48:11.947160 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d6bb27d-3ad7-49b1-8a17-2ca709952079-catalog-content\") pod \"0d6bb27d-3ad7-49b1-8a17-2ca709952079\" (UID: \"0d6bb27d-3ad7-49b1-8a17-2ca709952079\") "
Feb 28 04:48:11 crc kubenswrapper[5014]: I0228 04:48:11.947327 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftqd5\" (UniqueName: \"kubernetes.io/projected/0d6bb27d-3ad7-49b1-8a17-2ca709952079-kube-api-access-ftqd5\") pod \"0d6bb27d-3ad7-49b1-8a17-2ca709952079\" (UID: \"0d6bb27d-3ad7-49b1-8a17-2ca709952079\") "
Feb 28 04:48:11 crc kubenswrapper[5014]: I0228 04:48:11.947397 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d6bb27d-3ad7-49b1-8a17-2ca709952079-utilities\") pod \"0d6bb27d-3ad7-49b1-8a17-2ca709952079\" (UID: \"0d6bb27d-3ad7-49b1-8a17-2ca709952079\") "
Feb 28 04:48:11 crc kubenswrapper[5014]: I0228 04:48:11.948517 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d6bb27d-3ad7-49b1-8a17-2ca709952079-utilities" (OuterVolumeSpecName: "utilities") pod "0d6bb27d-3ad7-49b1-8a17-2ca709952079" (UID: "0d6bb27d-3ad7-49b1-8a17-2ca709952079"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 28 04:48:11 crc kubenswrapper[5014]: I0228 04:48:11.955151 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d6bb27d-3ad7-49b1-8a17-2ca709952079-kube-api-access-ftqd5" (OuterVolumeSpecName: "kube-api-access-ftqd5") pod "0d6bb27d-3ad7-49b1-8a17-2ca709952079" (UID: "0d6bb27d-3ad7-49b1-8a17-2ca709952079"). InnerVolumeSpecName "kube-api-access-ftqd5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 04:48:11 crc kubenswrapper[5014]: I0228 04:48:11.963492 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftqd5\" (UniqueName: \"kubernetes.io/projected/0d6bb27d-3ad7-49b1-8a17-2ca709952079-kube-api-access-ftqd5\") on node \"crc\" DevicePath \"\""
Feb 28 04:48:11 crc kubenswrapper[5014]: I0228 04:48:11.963560 5014 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d6bb27d-3ad7-49b1-8a17-2ca709952079-utilities\") on node \"crc\" DevicePath \"\""
Feb 28 04:48:12 crc kubenswrapper[5014]: I0228 04:48:12.100423 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d6bb27d-3ad7-49b1-8a17-2ca709952079-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d6bb27d-3ad7-49b1-8a17-2ca709952079" (UID: "0d6bb27d-3ad7-49b1-8a17-2ca709952079"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 28 04:48:12 crc kubenswrapper[5014]: I0228 04:48:12.169302 5014 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d6bb27d-3ad7-49b1-8a17-2ca709952079-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 28 04:48:12 crc kubenswrapper[5014]: I0228 04:48:12.431507 5014 generic.go:334] "Generic (PLEG): container finished" podID="0d6bb27d-3ad7-49b1-8a17-2ca709952079" containerID="583ecc7199e06c9ac9e8ca52566c45c19b73acc5e067f3362aa5046eb5ff2da3" exitCode=0
Feb 28 04:48:12 crc kubenswrapper[5014]: I0228 04:48:12.431607 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8vmv" event={"ID":"0d6bb27d-3ad7-49b1-8a17-2ca709952079","Type":"ContainerDied","Data":"583ecc7199e06c9ac9e8ca52566c45c19b73acc5e067f3362aa5046eb5ff2da3"}
Feb 28 04:48:12 crc kubenswrapper[5014]: I0228 04:48:12.431667 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m8vmv" event={"ID":"0d6bb27d-3ad7-49b1-8a17-2ca709952079","Type":"ContainerDied","Data":"4c9f7ec94f45ea2cd698e5dffe1588afd56f5e8223c3e4f5a6651a3817b2bafb"}
Feb 28 04:48:12 crc kubenswrapper[5014]: I0228 04:48:12.431687 5014 scope.go:117] "RemoveContainer" containerID="583ecc7199e06c9ac9e8ca52566c45c19b73acc5e067f3362aa5046eb5ff2da3"
Feb 28 04:48:12 crc kubenswrapper[5014]: I0228 04:48:12.432266 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m8vmv"
Feb 28 04:48:12 crc kubenswrapper[5014]: I0228 04:48:12.451161 5014 scope.go:117] "RemoveContainer" containerID="d91672c33f9e7404ee70ada9828fc03397541a9593752f14b3dfa9ab8c92f7bb"
Feb 28 04:48:12 crc kubenswrapper[5014]: I0228 04:48:12.459681 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m8vmv"]
Feb 28 04:48:12 crc kubenswrapper[5014]: I0228 04:48:12.466650 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m8vmv"]
Feb 28 04:48:12 crc kubenswrapper[5014]: I0228 04:48:12.472963 5014 scope.go:117] "RemoveContainer" containerID="e132d4106646d071c24a939b4132e2142c413514fedf6f5bb09dcaed77454e1e"
Feb 28 04:48:12 crc kubenswrapper[5014]: I0228 04:48:12.492951 5014 scope.go:117] "RemoveContainer" containerID="583ecc7199e06c9ac9e8ca52566c45c19b73acc5e067f3362aa5046eb5ff2da3"
Feb 28 04:48:12 crc kubenswrapper[5014]: E0228 04:48:12.493375 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"583ecc7199e06c9ac9e8ca52566c45c19b73acc5e067f3362aa5046eb5ff2da3\": container with ID starting with 583ecc7199e06c9ac9e8ca52566c45c19b73acc5e067f3362aa5046eb5ff2da3 not found: ID does not exist" containerID="583ecc7199e06c9ac9e8ca52566c45c19b73acc5e067f3362aa5046eb5ff2da3"
Feb 28 04:48:12 crc kubenswrapper[5014]: I0228 04:48:12.493422 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"583ecc7199e06c9ac9e8ca52566c45c19b73acc5e067f3362aa5046eb5ff2da3"} err="failed to get container status \"583ecc7199e06c9ac9e8ca52566c45c19b73acc5e067f3362aa5046eb5ff2da3\": rpc error: code = NotFound desc = could not find container \"583ecc7199e06c9ac9e8ca52566c45c19b73acc5e067f3362aa5046eb5ff2da3\": container with ID starting with 583ecc7199e06c9ac9e8ca52566c45c19b73acc5e067f3362aa5046eb5ff2da3 not found: ID does not exist"
Feb 28 04:48:12 crc kubenswrapper[5014]: I0228 04:48:12.493448 5014 scope.go:117] "RemoveContainer" containerID="d91672c33f9e7404ee70ada9828fc03397541a9593752f14b3dfa9ab8c92f7bb"
Feb 28 04:48:12 crc kubenswrapper[5014]: E0228 04:48:12.493670 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d91672c33f9e7404ee70ada9828fc03397541a9593752f14b3dfa9ab8c92f7bb\": container with ID starting with d91672c33f9e7404ee70ada9828fc03397541a9593752f14b3dfa9ab8c92f7bb not found: ID does not exist" containerID="d91672c33f9e7404ee70ada9828fc03397541a9593752f14b3dfa9ab8c92f7bb"
Feb 28 04:48:12 crc kubenswrapper[5014]: I0228 04:48:12.493689 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d91672c33f9e7404ee70ada9828fc03397541a9593752f14b3dfa9ab8c92f7bb"} err="failed to get container status \"d91672c33f9e7404ee70ada9828fc03397541a9593752f14b3dfa9ab8c92f7bb\": rpc error: code = NotFound desc = could not find container \"d91672c33f9e7404ee70ada9828fc03397541a9593752f14b3dfa9ab8c92f7bb\": container with ID starting with d91672c33f9e7404ee70ada9828fc03397541a9593752f14b3dfa9ab8c92f7bb not found: ID does not exist"
Feb 28 04:48:12 crc kubenswrapper[5014]: I0228 04:48:12.493729 5014 scope.go:117] "RemoveContainer" containerID="e132d4106646d071c24a939b4132e2142c413514fedf6f5bb09dcaed77454e1e"
Feb 28 04:48:12 crc kubenswrapper[5014]: E0228 04:48:12.493908 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e132d4106646d071c24a939b4132e2142c413514fedf6f5bb09dcaed77454e1e\": container with ID starting with e132d4106646d071c24a939b4132e2142c413514fedf6f5bb09dcaed77454e1e not found: ID does not exist" containerID="e132d4106646d071c24a939b4132e2142c413514fedf6f5bb09dcaed77454e1e"
Feb 28 04:48:12 crc kubenswrapper[5014]: I0228 04:48:12.493924 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e132d4106646d071c24a939b4132e2142c413514fedf6f5bb09dcaed77454e1e"} err="failed to get container status \"e132d4106646d071c24a939b4132e2142c413514fedf6f5bb09dcaed77454e1e\": rpc error: code = NotFound desc = could not find container \"e132d4106646d071c24a939b4132e2142c413514fedf6f5bb09dcaed77454e1e\": container with ID starting with e132d4106646d071c24a939b4132e2142c413514fedf6f5bb09dcaed77454e1e not found: ID does not exist"
Feb 28 04:48:14 crc kubenswrapper[5014]: I0228 04:48:14.179109 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d6bb27d-3ad7-49b1-8a17-2ca709952079" path="/var/lib/kubelet/pods/0d6bb27d-3ad7-49b1-8a17-2ca709952079/volumes"
Feb 28 04:48:23 crc kubenswrapper[5014]: I0228 04:48:23.890921 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-75b5fcbdc5-txj9m"
Feb 28 04:48:32 crc kubenswrapper[5014]: I0228 04:48:32.774664 5014 scope.go:117] "RemoveContainer" containerID="78d3d44955358c2e893ac7c821f6f56971aa4fe61590d43765150483e9e63604"
Feb 28 04:48:43 crc kubenswrapper[5014]: I0228 04:48:43.429079 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-c97d79cb8-9k7r6"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.141707 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-rkp2w"]
Feb 28 04:48:44 crc kubenswrapper[5014]: E0228 04:48:44.142711 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="337102fc-d918-4401-a98b-0903531566b9" containerName="oc"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.142729 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="337102fc-d918-4401-a98b-0903531566b9" containerName="oc"
Feb 28 04:48:44 crc kubenswrapper[5014]: E0228 04:48:44.142755 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d6bb27d-3ad7-49b1-8a17-2ca709952079" containerName="extract-content"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.142763 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d6bb27d-3ad7-49b1-8a17-2ca709952079" containerName="extract-content"
Feb 28 04:48:44 crc kubenswrapper[5014]: E0228 04:48:44.142775 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d6bb27d-3ad7-49b1-8a17-2ca709952079" containerName="extract-utilities"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.142783 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d6bb27d-3ad7-49b1-8a17-2ca709952079" containerName="extract-utilities"
Feb 28 04:48:44 crc kubenswrapper[5014]: E0228 04:48:44.142798 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d6bb27d-3ad7-49b1-8a17-2ca709952079" containerName="registry-server"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.142824 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d6bb27d-3ad7-49b1-8a17-2ca709952079" containerName="registry-server"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.142982 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d6bb27d-3ad7-49b1-8a17-2ca709952079" containerName="registry-server"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.142998 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="337102fc-d918-4401-a98b-0903531566b9" containerName="oc"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.145615 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-rkp2w"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.150437 5014 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-2wvk5"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.152478 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.152990 5014 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.153443 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-vwrdt"]
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.155953 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-vwrdt"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.160255 5014 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.166651 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-vwrdt"]
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.251180 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-v6tb4"]
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.256031 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-v6tb4"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.258381 5014 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.259471 5014 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.259541 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.264539 5014 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-xxq5m"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.274773 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-86ddb6bd46-tl2qx"]
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.275876 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-86ddb6bd46-tl2qx"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.278198 5014 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.281984 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-tl2qx"]
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.308241 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d1916ff1-d765-4133-8db7-50b8c6c9d3da-cert\") pod \"frr-k8s-webhook-server-7f989f654f-vwrdt\" (UID: \"d1916ff1-d765-4133-8db7-50b8c6c9d3da\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-vwrdt"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.308298 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cd8eb09e-7a57-4b01-b09c-519bbca4c5ed-metrics-certs\") pod \"frr-k8s-rkp2w\" (UID: \"cd8eb09e-7a57-4b01-b09c-519bbca4c5ed\") " pod="metallb-system/frr-k8s-rkp2w"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.308323 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/cd8eb09e-7a57-4b01-b09c-519bbca4c5ed-reloader\") pod \"frr-k8s-rkp2w\" (UID: \"cd8eb09e-7a57-4b01-b09c-519bbca4c5ed\") " pod="metallb-system/frr-k8s-rkp2w"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.308400 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrt8h\" (UniqueName: \"kubernetes.io/projected/cd8eb09e-7a57-4b01-b09c-519bbca4c5ed-kube-api-access-mrt8h\") pod \"frr-k8s-rkp2w\" (UID: \"cd8eb09e-7a57-4b01-b09c-519bbca4c5ed\") " pod="metallb-system/frr-k8s-rkp2w"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.308417 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/cd8eb09e-7a57-4b01-b09c-519bbca4c5ed-metrics\") pod \"frr-k8s-rkp2w\" (UID: \"cd8eb09e-7a57-4b01-b09c-519bbca4c5ed\") " pod="metallb-system/frr-k8s-rkp2w"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.308438 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/cd8eb09e-7a57-4b01-b09c-519bbca4c5ed-frr-conf\") pod \"frr-k8s-rkp2w\" (UID: \"cd8eb09e-7a57-4b01-b09c-519bbca4c5ed\") " pod="metallb-system/frr-k8s-rkp2w"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.308456 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/cd8eb09e-7a57-4b01-b09c-519bbca4c5ed-frr-sockets\") pod \"frr-k8s-rkp2w\" (UID: \"cd8eb09e-7a57-4b01-b09c-519bbca4c5ed\") " pod="metallb-system/frr-k8s-rkp2w"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.308488 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/cd8eb09e-7a57-4b01-b09c-519bbca4c5ed-frr-startup\") pod \"frr-k8s-rkp2w\" (UID: \"cd8eb09e-7a57-4b01-b09c-519bbca4c5ed\") " pod="metallb-system/frr-k8s-rkp2w"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.308524 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t44z\" (UniqueName: \"kubernetes.io/projected/d1916ff1-d765-4133-8db7-50b8c6c9d3da-kube-api-access-6t44z\") pod \"frr-k8s-webhook-server-7f989f654f-vwrdt\" (UID: \"d1916ff1-d765-4133-8db7-50b8c6c9d3da\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-vwrdt"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.409715 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52613a39-487f-4a3e-b2fb-97e969552377-cert\") pod \"controller-86ddb6bd46-tl2qx\" (UID: \"52613a39-487f-4a3e-b2fb-97e969552377\") " pod="metallb-system/controller-86ddb6bd46-tl2qx"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.409965 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t44z\" (UniqueName: \"kubernetes.io/projected/d1916ff1-d765-4133-8db7-50b8c6c9d3da-kube-api-access-6t44z\") pod \"frr-k8s-webhook-server-7f989f654f-vwrdt\" (UID: \"d1916ff1-d765-4133-8db7-50b8c6c9d3da\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-vwrdt"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.410070 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4e21c24c-ac78-4bff-863f-dfd7b10d0c7a-memberlist\") pod \"speaker-v6tb4\" (UID: \"4e21c24c-ac78-4bff-863f-dfd7b10d0c7a\") " pod="metallb-system/speaker-v6tb4"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.410155 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ljcq\" (UniqueName: \"kubernetes.io/projected/4e21c24c-ac78-4bff-863f-dfd7b10d0c7a-kube-api-access-6ljcq\") pod \"speaker-v6tb4\" (UID: \"4e21c24c-ac78-4bff-863f-dfd7b10d0c7a\") " pod="metallb-system/speaker-v6tb4"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.410261 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d1916ff1-d765-4133-8db7-50b8c6c9d3da-cert\") pod \"frr-k8s-webhook-server-7f989f654f-vwrdt\" (UID: \"d1916ff1-d765-4133-8db7-50b8c6c9d3da\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-vwrdt"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.410360 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cd8eb09e-7a57-4b01-b09c-519bbca4c5ed-metrics-certs\") pod \"frr-k8s-rkp2w\" (UID: \"cd8eb09e-7a57-4b01-b09c-519bbca4c5ed\") " pod="metallb-system/frr-k8s-rkp2w"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.410448 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/cd8eb09e-7a57-4b01-b09c-519bbca4c5ed-reloader\") pod \"frr-k8s-rkp2w\" (UID: \"cd8eb09e-7a57-4b01-b09c-519bbca4c5ed\") " pod="metallb-system/frr-k8s-rkp2w"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.410532 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8zw5\" (UniqueName: \"kubernetes.io/projected/52613a39-487f-4a3e-b2fb-97e969552377-kube-api-access-k8zw5\") pod \"controller-86ddb6bd46-tl2qx\" (UID: \"52613a39-487f-4a3e-b2fb-97e969552377\") " pod="metallb-system/controller-86ddb6bd46-tl2qx"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.410619 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/cd8eb09e-7a57-4b01-b09c-519bbca4c5ed-metrics\") pod \"frr-k8s-rkp2w\" (UID: \"cd8eb09e-7a57-4b01-b09c-519bbca4c5ed\") " pod="metallb-system/frr-k8s-rkp2w"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.410700 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrt8h\" (UniqueName: \"kubernetes.io/projected/cd8eb09e-7a57-4b01-b09c-519bbca4c5ed-kube-api-access-mrt8h\") pod \"frr-k8s-rkp2w\" (UID: \"cd8eb09e-7a57-4b01-b09c-519bbca4c5ed\") " pod="metallb-system/frr-k8s-rkp2w"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.410777 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4e21c24c-ac78-4bff-863f-dfd7b10d0c7a-metallb-excludel2\") pod \"speaker-v6tb4\" (UID: \"4e21c24c-ac78-4bff-863f-dfd7b10d0c7a\") " pod="metallb-system/speaker-v6tb4"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.410879 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/cd8eb09e-7a57-4b01-b09c-519bbca4c5ed-frr-conf\") pod \"frr-k8s-rkp2w\" (UID: \"cd8eb09e-7a57-4b01-b09c-519bbca4c5ed\") " pod="metallb-system/frr-k8s-rkp2w"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.410977 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/cd8eb09e-7a57-4b01-b09c-519bbca4c5ed-frr-sockets\") pod \"frr-k8s-rkp2w\" (UID: \"cd8eb09e-7a57-4b01-b09c-519bbca4c5ed\") " pod="metallb-system/frr-k8s-rkp2w"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.410886 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/cd8eb09e-7a57-4b01-b09c-519bbca4c5ed-reloader\") pod \"frr-k8s-rkp2w\" (UID: \"cd8eb09e-7a57-4b01-b09c-519bbca4c5ed\") " pod="metallb-system/frr-k8s-rkp2w"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.411019 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/cd8eb09e-7a57-4b01-b09c-519bbca4c5ed-metrics\") pod \"frr-k8s-rkp2w\" (UID: \"cd8eb09e-7a57-4b01-b09c-519bbca4c5ed\") " pod="metallb-system/frr-k8s-rkp2w"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.411228 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/cd8eb09e-7a57-4b01-b09c-519bbca4c5ed-frr-conf\") pod \"frr-k8s-rkp2w\" (UID: \"cd8eb09e-7a57-4b01-b09c-519bbca4c5ed\") " pod="metallb-system/frr-k8s-rkp2w"
Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.411082 5014
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/52613a39-487f-4a3e-b2fb-97e969552377-metrics-certs\") pod \"controller-86ddb6bd46-tl2qx\" (UID: \"52613a39-487f-4a3e-b2fb-97e969552377\") " pod="metallb-system/controller-86ddb6bd46-tl2qx" Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.411265 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/cd8eb09e-7a57-4b01-b09c-519bbca4c5ed-frr-sockets\") pod \"frr-k8s-rkp2w\" (UID: \"cd8eb09e-7a57-4b01-b09c-519bbca4c5ed\") " pod="metallb-system/frr-k8s-rkp2w" Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.411299 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e21c24c-ac78-4bff-863f-dfd7b10d0c7a-metrics-certs\") pod \"speaker-v6tb4\" (UID: \"4e21c24c-ac78-4bff-863f-dfd7b10d0c7a\") " pod="metallb-system/speaker-v6tb4" Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.411327 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/cd8eb09e-7a57-4b01-b09c-519bbca4c5ed-frr-startup\") pod \"frr-k8s-rkp2w\" (UID: \"cd8eb09e-7a57-4b01-b09c-519bbca4c5ed\") " pod="metallb-system/frr-k8s-rkp2w" Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.412513 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/cd8eb09e-7a57-4b01-b09c-519bbca4c5ed-frr-startup\") pod \"frr-k8s-rkp2w\" (UID: \"cd8eb09e-7a57-4b01-b09c-519bbca4c5ed\") " pod="metallb-system/frr-k8s-rkp2w" Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.415984 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d1916ff1-d765-4133-8db7-50b8c6c9d3da-cert\") 
pod \"frr-k8s-webhook-server-7f989f654f-vwrdt\" (UID: \"d1916ff1-d765-4133-8db7-50b8c6c9d3da\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-vwrdt" Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.418302 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cd8eb09e-7a57-4b01-b09c-519bbca4c5ed-metrics-certs\") pod \"frr-k8s-rkp2w\" (UID: \"cd8eb09e-7a57-4b01-b09c-519bbca4c5ed\") " pod="metallb-system/frr-k8s-rkp2w" Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.445460 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrt8h\" (UniqueName: \"kubernetes.io/projected/cd8eb09e-7a57-4b01-b09c-519bbca4c5ed-kube-api-access-mrt8h\") pod \"frr-k8s-rkp2w\" (UID: \"cd8eb09e-7a57-4b01-b09c-519bbca4c5ed\") " pod="metallb-system/frr-k8s-rkp2w" Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.445607 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t44z\" (UniqueName: \"kubernetes.io/projected/d1916ff1-d765-4133-8db7-50b8c6c9d3da-kube-api-access-6t44z\") pod \"frr-k8s-webhook-server-7f989f654f-vwrdt\" (UID: \"d1916ff1-d765-4133-8db7-50b8c6c9d3da\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-vwrdt" Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.475073 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-rkp2w" Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.488552 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-vwrdt" Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.513094 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/52613a39-487f-4a3e-b2fb-97e969552377-metrics-certs\") pod \"controller-86ddb6bd46-tl2qx\" (UID: \"52613a39-487f-4a3e-b2fb-97e969552377\") " pod="metallb-system/controller-86ddb6bd46-tl2qx" Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.513134 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e21c24c-ac78-4bff-863f-dfd7b10d0c7a-metrics-certs\") pod \"speaker-v6tb4\" (UID: \"4e21c24c-ac78-4bff-863f-dfd7b10d0c7a\") " pod="metallb-system/speaker-v6tb4" Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.513157 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52613a39-487f-4a3e-b2fb-97e969552377-cert\") pod \"controller-86ddb6bd46-tl2qx\" (UID: \"52613a39-487f-4a3e-b2fb-97e969552377\") " pod="metallb-system/controller-86ddb6bd46-tl2qx" Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.513190 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4e21c24c-ac78-4bff-863f-dfd7b10d0c7a-memberlist\") pod \"speaker-v6tb4\" (UID: \"4e21c24c-ac78-4bff-863f-dfd7b10d0c7a\") " pod="metallb-system/speaker-v6tb4" Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.513211 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ljcq\" (UniqueName: \"kubernetes.io/projected/4e21c24c-ac78-4bff-863f-dfd7b10d0c7a-kube-api-access-6ljcq\") pod \"speaker-v6tb4\" (UID: \"4e21c24c-ac78-4bff-863f-dfd7b10d0c7a\") " pod="metallb-system/speaker-v6tb4" Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.513239 5014 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8zw5\" (UniqueName: \"kubernetes.io/projected/52613a39-487f-4a3e-b2fb-97e969552377-kube-api-access-k8zw5\") pod \"controller-86ddb6bd46-tl2qx\" (UID: \"52613a39-487f-4a3e-b2fb-97e969552377\") " pod="metallb-system/controller-86ddb6bd46-tl2qx" Feb 28 04:48:44 crc kubenswrapper[5014]: E0228 04:48:44.513244 5014 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Feb 28 04:48:44 crc kubenswrapper[5014]: E0228 04:48:44.513333 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52613a39-487f-4a3e-b2fb-97e969552377-metrics-certs podName:52613a39-487f-4a3e-b2fb-97e969552377 nodeName:}" failed. No retries permitted until 2026-02-28 04:48:45.01330688 +0000 UTC m=+913.683432860 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/52613a39-487f-4a3e-b2fb-97e969552377-metrics-certs") pod "controller-86ddb6bd46-tl2qx" (UID: "52613a39-487f-4a3e-b2fb-97e969552377") : secret "controller-certs-secret" not found Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.513262 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4e21c24c-ac78-4bff-863f-dfd7b10d0c7a-metallb-excludel2\") pod \"speaker-v6tb4\" (UID: \"4e21c24c-ac78-4bff-863f-dfd7b10d0c7a\") " pod="metallb-system/speaker-v6tb4" Feb 28 04:48:44 crc kubenswrapper[5014]: E0228 04:48:44.513763 5014 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 28 04:48:44 crc kubenswrapper[5014]: E0228 04:48:44.513824 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e21c24c-ac78-4bff-863f-dfd7b10d0c7a-memberlist podName:4e21c24c-ac78-4bff-863f-dfd7b10d0c7a nodeName:}" failed. 
No retries permitted until 2026-02-28 04:48:45.013797933 +0000 UTC m=+913.683923843 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4e21c24c-ac78-4bff-863f-dfd7b10d0c7a-memberlist") pod "speaker-v6tb4" (UID: "4e21c24c-ac78-4bff-863f-dfd7b10d0c7a") : secret "metallb-memberlist" not found Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.513977 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/4e21c24c-ac78-4bff-863f-dfd7b10d0c7a-metallb-excludel2\") pod \"speaker-v6tb4\" (UID: \"4e21c24c-ac78-4bff-863f-dfd7b10d0c7a\") " pod="metallb-system/speaker-v6tb4" Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.516328 5014 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.524065 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4e21c24c-ac78-4bff-863f-dfd7b10d0c7a-metrics-certs\") pod \"speaker-v6tb4\" (UID: \"4e21c24c-ac78-4bff-863f-dfd7b10d0c7a\") " pod="metallb-system/speaker-v6tb4" Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.529093 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/52613a39-487f-4a3e-b2fb-97e969552377-cert\") pod \"controller-86ddb6bd46-tl2qx\" (UID: \"52613a39-487f-4a3e-b2fb-97e969552377\") " pod="metallb-system/controller-86ddb6bd46-tl2qx" Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.530301 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ljcq\" (UniqueName: \"kubernetes.io/projected/4e21c24c-ac78-4bff-863f-dfd7b10d0c7a-kube-api-access-6ljcq\") pod \"speaker-v6tb4\" (UID: \"4e21c24c-ac78-4bff-863f-dfd7b10d0c7a\") " pod="metallb-system/speaker-v6tb4" Feb 28 04:48:44 crc 
kubenswrapper[5014]: I0228 04:48:44.536259 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8zw5\" (UniqueName: \"kubernetes.io/projected/52613a39-487f-4a3e-b2fb-97e969552377-kube-api-access-k8zw5\") pod \"controller-86ddb6bd46-tl2qx\" (UID: \"52613a39-487f-4a3e-b2fb-97e969552377\") " pod="metallb-system/controller-86ddb6bd46-tl2qx" Feb 28 04:48:44 crc kubenswrapper[5014]: I0228 04:48:44.913427 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-vwrdt"] Feb 28 04:48:44 crc kubenswrapper[5014]: W0228 04:48:44.923829 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1916ff1_d765_4133_8db7_50b8c6c9d3da.slice/crio-1228601e8ce3987b4290ddd898b869a60c77fbf32f3ab495dc295f4c6262df5b WatchSource:0}: Error finding container 1228601e8ce3987b4290ddd898b869a60c77fbf32f3ab495dc295f4c6262df5b: Status 404 returned error can't find the container with id 1228601e8ce3987b4290ddd898b869a60c77fbf32f3ab495dc295f4c6262df5b Feb 28 04:48:45 crc kubenswrapper[5014]: I0228 04:48:45.020992 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/52613a39-487f-4a3e-b2fb-97e969552377-metrics-certs\") pod \"controller-86ddb6bd46-tl2qx\" (UID: \"52613a39-487f-4a3e-b2fb-97e969552377\") " pod="metallb-system/controller-86ddb6bd46-tl2qx" Feb 28 04:48:45 crc kubenswrapper[5014]: I0228 04:48:45.021078 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4e21c24c-ac78-4bff-863f-dfd7b10d0c7a-memberlist\") pod \"speaker-v6tb4\" (UID: \"4e21c24c-ac78-4bff-863f-dfd7b10d0c7a\") " pod="metallb-system/speaker-v6tb4" Feb 28 04:48:45 crc kubenswrapper[5014]: E0228 04:48:45.021231 5014 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret 
"metallb-memberlist" not found Feb 28 04:48:45 crc kubenswrapper[5014]: E0228 04:48:45.021306 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4e21c24c-ac78-4bff-863f-dfd7b10d0c7a-memberlist podName:4e21c24c-ac78-4bff-863f-dfd7b10d0c7a nodeName:}" failed. No retries permitted until 2026-02-28 04:48:46.021279486 +0000 UTC m=+914.691405406 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/4e21c24c-ac78-4bff-863f-dfd7b10d0c7a-memberlist") pod "speaker-v6tb4" (UID: "4e21c24c-ac78-4bff-863f-dfd7b10d0c7a") : secret "metallb-memberlist" not found Feb 28 04:48:45 crc kubenswrapper[5014]: I0228 04:48:45.027530 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/52613a39-487f-4a3e-b2fb-97e969552377-metrics-certs\") pod \"controller-86ddb6bd46-tl2qx\" (UID: \"52613a39-487f-4a3e-b2fb-97e969552377\") " pod="metallb-system/controller-86ddb6bd46-tl2qx" Feb 28 04:48:45 crc kubenswrapper[5014]: I0228 04:48:45.205013 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-86ddb6bd46-tl2qx" Feb 28 04:48:45 crc kubenswrapper[5014]: I0228 04:48:45.461190 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-tl2qx"] Feb 28 04:48:45 crc kubenswrapper[5014]: W0228 04:48:45.467396 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod52613a39_487f_4a3e_b2fb_97e969552377.slice/crio-a774b2cf803125a007dae962e50f908be990527aa96ac9177220e240b6efb2c9 WatchSource:0}: Error finding container a774b2cf803125a007dae962e50f908be990527aa96ac9177220e240b6efb2c9: Status 404 returned error can't find the container with id a774b2cf803125a007dae962e50f908be990527aa96ac9177220e240b6efb2c9 Feb 28 04:48:45 crc kubenswrapper[5014]: I0228 04:48:45.647351 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rkp2w" event={"ID":"cd8eb09e-7a57-4b01-b09c-519bbca4c5ed","Type":"ContainerStarted","Data":"ebc434517127cc14089a0324ab0bee92798292c4ec3ff6f581e5e12de8507f55"} Feb 28 04:48:45 crc kubenswrapper[5014]: I0228 04:48:45.649405 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-tl2qx" event={"ID":"52613a39-487f-4a3e-b2fb-97e969552377","Type":"ContainerStarted","Data":"75dcbaec4aa05c860a04b418443d2112570cdb710944c5fb2432145a6f764b47"} Feb 28 04:48:45 crc kubenswrapper[5014]: I0228 04:48:45.649456 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-tl2qx" event={"ID":"52613a39-487f-4a3e-b2fb-97e969552377","Type":"ContainerStarted","Data":"a774b2cf803125a007dae962e50f908be990527aa96ac9177220e240b6efb2c9"} Feb 28 04:48:45 crc kubenswrapper[5014]: I0228 04:48:45.650474 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-vwrdt" 
event={"ID":"d1916ff1-d765-4133-8db7-50b8c6c9d3da","Type":"ContainerStarted","Data":"1228601e8ce3987b4290ddd898b869a60c77fbf32f3ab495dc295f4c6262df5b"} Feb 28 04:48:46 crc kubenswrapper[5014]: I0228 04:48:46.046961 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4e21c24c-ac78-4bff-863f-dfd7b10d0c7a-memberlist\") pod \"speaker-v6tb4\" (UID: \"4e21c24c-ac78-4bff-863f-dfd7b10d0c7a\") " pod="metallb-system/speaker-v6tb4" Feb 28 04:48:46 crc kubenswrapper[5014]: I0228 04:48:46.067390 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/4e21c24c-ac78-4bff-863f-dfd7b10d0c7a-memberlist\") pod \"speaker-v6tb4\" (UID: \"4e21c24c-ac78-4bff-863f-dfd7b10d0c7a\") " pod="metallb-system/speaker-v6tb4" Feb 28 04:48:46 crc kubenswrapper[5014]: I0228 04:48:46.076271 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-v6tb4" Feb 28 04:48:46 crc kubenswrapper[5014]: W0228 04:48:46.096958 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e21c24c_ac78_4bff_863f_dfd7b10d0c7a.slice/crio-91b08de6622d8ad56b52543d488883c1b24a801ec062e0b6e8e2bf796481b3b1 WatchSource:0}: Error finding container 91b08de6622d8ad56b52543d488883c1b24a801ec062e0b6e8e2bf796481b3b1: Status 404 returned error can't find the container with id 91b08de6622d8ad56b52543d488883c1b24a801ec062e0b6e8e2bf796481b3b1 Feb 28 04:48:46 crc kubenswrapper[5014]: I0228 04:48:46.659042 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-v6tb4" event={"ID":"4e21c24c-ac78-4bff-863f-dfd7b10d0c7a","Type":"ContainerStarted","Data":"1573286d0d7687e850e16e619f65d0525d4ce3b74c39b28940871b746031781d"} Feb 28 04:48:46 crc kubenswrapper[5014]: I0228 04:48:46.659430 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/speaker-v6tb4" event={"ID":"4e21c24c-ac78-4bff-863f-dfd7b10d0c7a","Type":"ContainerStarted","Data":"d1140ffaf5c5cbffcc70f91452118c41b34780e1066ed5e22304429ed84064bd"} Feb 28 04:48:46 crc kubenswrapper[5014]: I0228 04:48:46.659449 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-v6tb4" event={"ID":"4e21c24c-ac78-4bff-863f-dfd7b10d0c7a","Type":"ContainerStarted","Data":"91b08de6622d8ad56b52543d488883c1b24a801ec062e0b6e8e2bf796481b3b1"} Feb 28 04:48:46 crc kubenswrapper[5014]: I0228 04:48:46.659609 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-v6tb4" Feb 28 04:48:46 crc kubenswrapper[5014]: I0228 04:48:46.664698 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-tl2qx" event={"ID":"52613a39-487f-4a3e-b2fb-97e969552377","Type":"ContainerStarted","Data":"09d33c44770eb1f751d66109d8784ddb83006f868773c2d3a213a8c8eb418a11"} Feb 28 04:48:46 crc kubenswrapper[5014]: I0228 04:48:46.664853 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-86ddb6bd46-tl2qx" Feb 28 04:48:46 crc kubenswrapper[5014]: I0228 04:48:46.679143 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-v6tb4" podStartSLOduration=2.6791230390000003 podStartE2EDuration="2.679123039s" podCreationTimestamp="2026-02-28 04:48:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:48:46.676293993 +0000 UTC m=+915.346419903" watchObservedRunningTime="2026-02-28 04:48:46.679123039 +0000 UTC m=+915.349248949" Feb 28 04:48:46 crc kubenswrapper[5014]: I0228 04:48:46.694717 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-86ddb6bd46-tl2qx" podStartSLOduration=2.69469884 podStartE2EDuration="2.69469884s" 
podCreationTimestamp="2026-02-28 04:48:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:48:46.691868814 +0000 UTC m=+915.361994724" watchObservedRunningTime="2026-02-28 04:48:46.69469884 +0000 UTC m=+915.364824750" Feb 28 04:48:52 crc kubenswrapper[5014]: I0228 04:48:52.701879 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-vwrdt" event={"ID":"d1916ff1-d765-4133-8db7-50b8c6c9d3da","Type":"ContainerStarted","Data":"29e360d10c57a4236d11abd90cf90285e6d502dd724ddfbc8a71b75a044edde0"} Feb 28 04:48:52 crc kubenswrapper[5014]: I0228 04:48:52.702731 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-vwrdt" Feb 28 04:48:52 crc kubenswrapper[5014]: I0228 04:48:52.708587 5014 generic.go:334] "Generic (PLEG): container finished" podID="cd8eb09e-7a57-4b01-b09c-519bbca4c5ed" containerID="783a8823004798e426d307bbcd9ebe12667c8fc5a97b5ba4eb1e9b26b15e1536" exitCode=0 Feb 28 04:48:52 crc kubenswrapper[5014]: I0228 04:48:52.708660 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rkp2w" event={"ID":"cd8eb09e-7a57-4b01-b09c-519bbca4c5ed","Type":"ContainerDied","Data":"783a8823004798e426d307bbcd9ebe12667c8fc5a97b5ba4eb1e9b26b15e1536"} Feb 28 04:48:52 crc kubenswrapper[5014]: I0228 04:48:52.742721 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-vwrdt" podStartSLOduration=1.917614898 podStartE2EDuration="8.742688462s" podCreationTimestamp="2026-02-28 04:48:44 +0000 UTC" firstStartedPulling="2026-02-28 04:48:44.927464052 +0000 UTC m=+913.597589962" lastFinishedPulling="2026-02-28 04:48:51.752537616 +0000 UTC m=+920.422663526" observedRunningTime="2026-02-28 04:48:52.729535787 +0000 UTC m=+921.399661747" watchObservedRunningTime="2026-02-28 
04:48:52.742688462 +0000 UTC m=+921.412814412" Feb 28 04:48:53 crc kubenswrapper[5014]: I0228 04:48:53.716977 5014 generic.go:334] "Generic (PLEG): container finished" podID="cd8eb09e-7a57-4b01-b09c-519bbca4c5ed" containerID="44af24020407e165d152c491bccdee7ea5000113a0f1789349d7a2e7f8ef98a7" exitCode=0 Feb 28 04:48:53 crc kubenswrapper[5014]: I0228 04:48:53.717090 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rkp2w" event={"ID":"cd8eb09e-7a57-4b01-b09c-519bbca4c5ed","Type":"ContainerDied","Data":"44af24020407e165d152c491bccdee7ea5000113a0f1789349d7a2e7f8ef98a7"} Feb 28 04:48:54 crc kubenswrapper[5014]: I0228 04:48:54.726379 5014 generic.go:334] "Generic (PLEG): container finished" podID="cd8eb09e-7a57-4b01-b09c-519bbca4c5ed" containerID="541e68b6a8019d1f3c54f10a68e035e8ec374ec7af58bd61b29257da8ab118fd" exitCode=0 Feb 28 04:48:54 crc kubenswrapper[5014]: I0228 04:48:54.726997 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rkp2w" event={"ID":"cd8eb09e-7a57-4b01-b09c-519bbca4c5ed","Type":"ContainerDied","Data":"541e68b6a8019d1f3c54f10a68e035e8ec374ec7af58bd61b29257da8ab118fd"} Feb 28 04:48:55 crc kubenswrapper[5014]: I0228 04:48:55.209219 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-86ddb6bd46-tl2qx" Feb 28 04:48:55 crc kubenswrapper[5014]: I0228 04:48:55.736895 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rkp2w" event={"ID":"cd8eb09e-7a57-4b01-b09c-519bbca4c5ed","Type":"ContainerStarted","Data":"2749dd03c1f8c4ef893482409fce045b222206bcb72e194b44e32207491c286f"} Feb 28 04:48:55 crc kubenswrapper[5014]: I0228 04:48:55.736942 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rkp2w" event={"ID":"cd8eb09e-7a57-4b01-b09c-519bbca4c5ed","Type":"ContainerStarted","Data":"6f5f54e46c76041e7efcb769a7bc88f320224a2bb6365dd0f02fa08a465abec8"} Feb 28 04:48:55 crc kubenswrapper[5014]: 
I0228 04:48:55.736955 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rkp2w" event={"ID":"cd8eb09e-7a57-4b01-b09c-519bbca4c5ed","Type":"ContainerStarted","Data":"2e548d963d54be333d3489fc66b0d44c1311f8665f8b0909c8786c06976c0480"} Feb 28 04:48:55 crc kubenswrapper[5014]: I0228 04:48:55.736969 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rkp2w" event={"ID":"cd8eb09e-7a57-4b01-b09c-519bbca4c5ed","Type":"ContainerStarted","Data":"64d17b3d391b87db362365944abb7628fd0772204672bdcbac3c558776ebb2e7"} Feb 28 04:48:56 crc kubenswrapper[5014]: I0228 04:48:56.079318 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-v6tb4" Feb 28 04:48:56 crc kubenswrapper[5014]: I0228 04:48:56.751266 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rkp2w" event={"ID":"cd8eb09e-7a57-4b01-b09c-519bbca4c5ed","Type":"ContainerStarted","Data":"718a1ba2f24d22ebffe18da0d2e19115c916a2cc7a42be742898984188bd79f8"} Feb 28 04:48:56 crc kubenswrapper[5014]: I0228 04:48:56.751642 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-rkp2w" Feb 28 04:48:56 crc kubenswrapper[5014]: I0228 04:48:56.751663 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-rkp2w" event={"ID":"cd8eb09e-7a57-4b01-b09c-519bbca4c5ed","Type":"ContainerStarted","Data":"335ab89c0da1ce9694c640542e18957195942ab11e72e5d232778ab1111dfa4f"} Feb 28 04:48:56 crc kubenswrapper[5014]: I0228 04:48:56.799703 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-rkp2w" podStartSLOduration=5.697308743 podStartE2EDuration="12.799675556s" podCreationTimestamp="2026-02-28 04:48:44 +0000 UTC" firstStartedPulling="2026-02-28 04:48:44.654004568 +0000 UTC m=+913.324130478" lastFinishedPulling="2026-02-28 04:48:51.756371341 +0000 UTC m=+920.426497291" observedRunningTime="2026-02-28 
04:48:56.791403332 +0000 UTC m=+925.461529282" watchObservedRunningTime="2026-02-28 04:48:56.799675556 +0000 UTC m=+925.469801506" Feb 28 04:48:58 crc kubenswrapper[5014]: I0228 04:48:58.860320 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-n28fb"] Feb 28 04:48:58 crc kubenswrapper[5014]: I0228 04:48:58.861665 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-n28fb" Feb 28 04:48:58 crc kubenswrapper[5014]: I0228 04:48:58.864195 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-rqfgl" Feb 28 04:48:58 crc kubenswrapper[5014]: I0228 04:48:58.867172 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Feb 28 04:48:58 crc kubenswrapper[5014]: I0228 04:48:58.867851 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Feb 28 04:48:58 crc kubenswrapper[5014]: I0228 04:48:58.881573 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-n28fb"] Feb 28 04:48:58 crc kubenswrapper[5014]: I0228 04:48:58.937792 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p64rv\" (UniqueName: \"kubernetes.io/projected/80ceb56e-fdfb-47ef-8ec8-f3fad173e274-kube-api-access-p64rv\") pod \"openstack-operator-index-n28fb\" (UID: \"80ceb56e-fdfb-47ef-8ec8-f3fad173e274\") " pod="openstack-operators/openstack-operator-index-n28fb" Feb 28 04:48:59 crc kubenswrapper[5014]: I0228 04:48:59.039291 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p64rv\" (UniqueName: \"kubernetes.io/projected/80ceb56e-fdfb-47ef-8ec8-f3fad173e274-kube-api-access-p64rv\") pod \"openstack-operator-index-n28fb\" (UID: 
\"80ceb56e-fdfb-47ef-8ec8-f3fad173e274\") " pod="openstack-operators/openstack-operator-index-n28fb" Feb 28 04:48:59 crc kubenswrapper[5014]: I0228 04:48:59.061631 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p64rv\" (UniqueName: \"kubernetes.io/projected/80ceb56e-fdfb-47ef-8ec8-f3fad173e274-kube-api-access-p64rv\") pod \"openstack-operator-index-n28fb\" (UID: \"80ceb56e-fdfb-47ef-8ec8-f3fad173e274\") " pod="openstack-operators/openstack-operator-index-n28fb" Feb 28 04:48:59 crc kubenswrapper[5014]: I0228 04:48:59.186204 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-n28fb" Feb 28 04:48:59 crc kubenswrapper[5014]: I0228 04:48:59.476022 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-rkp2w" Feb 28 04:48:59 crc kubenswrapper[5014]: I0228 04:48:59.555956 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-rkp2w" Feb 28 04:48:59 crc kubenswrapper[5014]: I0228 04:48:59.614025 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-n28fb"] Feb 28 04:48:59 crc kubenswrapper[5014]: W0228 04:48:59.614792 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80ceb56e_fdfb_47ef_8ec8_f3fad173e274.slice/crio-8e091d5126385f9b2368f4b2789536c38d6d952877ff68330b43a0c5d344ee61 WatchSource:0}: Error finding container 8e091d5126385f9b2368f4b2789536c38d6d952877ff68330b43a0c5d344ee61: Status 404 returned error can't find the container with id 8e091d5126385f9b2368f4b2789536c38d6d952877ff68330b43a0c5d344ee61 Feb 28 04:48:59 crc kubenswrapper[5014]: I0228 04:48:59.775620 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-n28fb" 
event={"ID":"80ceb56e-fdfb-47ef-8ec8-f3fad173e274","Type":"ContainerStarted","Data":"8e091d5126385f9b2368f4b2789536c38d6d952877ff68330b43a0c5d344ee61"} Feb 28 04:49:02 crc kubenswrapper[5014]: I0228 04:49:02.042767 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-n28fb"] Feb 28 04:49:02 crc kubenswrapper[5014]: I0228 04:49:02.649928 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-2vp6x"] Feb 28 04:49:02 crc kubenswrapper[5014]: I0228 04:49:02.656568 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2vp6x" Feb 28 04:49:02 crc kubenswrapper[5014]: I0228 04:49:02.667291 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2vp6x"] Feb 28 04:49:02 crc kubenswrapper[5014]: I0228 04:49:02.798705 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-n28fb" event={"ID":"80ceb56e-fdfb-47ef-8ec8-f3fad173e274","Type":"ContainerStarted","Data":"f177cf04c8e5b392262e94fdda48a31614caf95f0916c7257c05566f9367c348"} Feb 28 04:49:02 crc kubenswrapper[5014]: I0228 04:49:02.798923 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-n28fb" podUID="80ceb56e-fdfb-47ef-8ec8-f3fad173e274" containerName="registry-server" containerID="cri-o://f177cf04c8e5b392262e94fdda48a31614caf95f0916c7257c05566f9367c348" gracePeriod=2 Feb 28 04:49:02 crc kubenswrapper[5014]: I0228 04:49:02.804920 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k82dt\" (UniqueName: \"kubernetes.io/projected/55997ed6-05a0-420d-bdaf-5d27ea9e0cf2-kube-api-access-k82dt\") pod \"openstack-operator-index-2vp6x\" (UID: \"55997ed6-05a0-420d-bdaf-5d27ea9e0cf2\") " 
pod="openstack-operators/openstack-operator-index-2vp6x" Feb 28 04:49:02 crc kubenswrapper[5014]: I0228 04:49:02.827690 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-n28fb" podStartSLOduration=2.437215524 podStartE2EDuration="4.827624267s" podCreationTimestamp="2026-02-28 04:48:58 +0000 UTC" firstStartedPulling="2026-02-28 04:48:59.617330336 +0000 UTC m=+928.287456246" lastFinishedPulling="2026-02-28 04:49:02.007739039 +0000 UTC m=+930.677864989" observedRunningTime="2026-02-28 04:49:02.824439181 +0000 UTC m=+931.494565151" watchObservedRunningTime="2026-02-28 04:49:02.827624267 +0000 UTC m=+931.497750217" Feb 28 04:49:02 crc kubenswrapper[5014]: I0228 04:49:02.906627 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k82dt\" (UniqueName: \"kubernetes.io/projected/55997ed6-05a0-420d-bdaf-5d27ea9e0cf2-kube-api-access-k82dt\") pod \"openstack-operator-index-2vp6x\" (UID: \"55997ed6-05a0-420d-bdaf-5d27ea9e0cf2\") " pod="openstack-operators/openstack-operator-index-2vp6x" Feb 28 04:49:02 crc kubenswrapper[5014]: I0228 04:49:02.941761 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k82dt\" (UniqueName: \"kubernetes.io/projected/55997ed6-05a0-420d-bdaf-5d27ea9e0cf2-kube-api-access-k82dt\") pod \"openstack-operator-index-2vp6x\" (UID: \"55997ed6-05a0-420d-bdaf-5d27ea9e0cf2\") " pod="openstack-operators/openstack-operator-index-2vp6x" Feb 28 04:49:02 crc kubenswrapper[5014]: I0228 04:49:02.986621 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2vp6x" Feb 28 04:49:03 crc kubenswrapper[5014]: I0228 04:49:03.355975 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-n28fb" Feb 28 04:49:03 crc kubenswrapper[5014]: I0228 04:49:03.513181 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p64rv\" (UniqueName: \"kubernetes.io/projected/80ceb56e-fdfb-47ef-8ec8-f3fad173e274-kube-api-access-p64rv\") pod \"80ceb56e-fdfb-47ef-8ec8-f3fad173e274\" (UID: \"80ceb56e-fdfb-47ef-8ec8-f3fad173e274\") " Feb 28 04:49:03 crc kubenswrapper[5014]: I0228 04:49:03.519333 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80ceb56e-fdfb-47ef-8ec8-f3fad173e274-kube-api-access-p64rv" (OuterVolumeSpecName: "kube-api-access-p64rv") pod "80ceb56e-fdfb-47ef-8ec8-f3fad173e274" (UID: "80ceb56e-fdfb-47ef-8ec8-f3fad173e274"). InnerVolumeSpecName "kube-api-access-p64rv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:49:03 crc kubenswrapper[5014]: I0228 04:49:03.534440 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2vp6x"] Feb 28 04:49:03 crc kubenswrapper[5014]: W0228 04:49:03.544343 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod55997ed6_05a0_420d_bdaf_5d27ea9e0cf2.slice/crio-f9d7683b010f6b2708d448a3e21762f457f2cd1a2d48ac4ec65b3015d085b6b1 WatchSource:0}: Error finding container f9d7683b010f6b2708d448a3e21762f457f2cd1a2d48ac4ec65b3015d085b6b1: Status 404 returned error can't find the container with id f9d7683b010f6b2708d448a3e21762f457f2cd1a2d48ac4ec65b3015d085b6b1 Feb 28 04:49:03 crc kubenswrapper[5014]: I0228 04:49:03.550668 5014 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 04:49:03 crc kubenswrapper[5014]: I0228 04:49:03.615078 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p64rv\" (UniqueName: 
\"kubernetes.io/projected/80ceb56e-fdfb-47ef-8ec8-f3fad173e274-kube-api-access-p64rv\") on node \"crc\" DevicePath \"\"" Feb 28 04:49:03 crc kubenswrapper[5014]: I0228 04:49:03.812323 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2vp6x" event={"ID":"55997ed6-05a0-420d-bdaf-5d27ea9e0cf2","Type":"ContainerStarted","Data":"6866fb3447f5f3018d40cea2be47009718b8c356741591915fab22d331204a90"} Feb 28 04:49:03 crc kubenswrapper[5014]: I0228 04:49:03.812402 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2vp6x" event={"ID":"55997ed6-05a0-420d-bdaf-5d27ea9e0cf2","Type":"ContainerStarted","Data":"f9d7683b010f6b2708d448a3e21762f457f2cd1a2d48ac4ec65b3015d085b6b1"} Feb 28 04:49:03 crc kubenswrapper[5014]: I0228 04:49:03.814639 5014 generic.go:334] "Generic (PLEG): container finished" podID="80ceb56e-fdfb-47ef-8ec8-f3fad173e274" containerID="f177cf04c8e5b392262e94fdda48a31614caf95f0916c7257c05566f9367c348" exitCode=0 Feb 28 04:49:03 crc kubenswrapper[5014]: I0228 04:49:03.814736 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-n28fb" event={"ID":"80ceb56e-fdfb-47ef-8ec8-f3fad173e274","Type":"ContainerDied","Data":"f177cf04c8e5b392262e94fdda48a31614caf95f0916c7257c05566f9367c348"} Feb 28 04:49:03 crc kubenswrapper[5014]: I0228 04:49:03.814782 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-n28fb" event={"ID":"80ceb56e-fdfb-47ef-8ec8-f3fad173e274","Type":"ContainerDied","Data":"8e091d5126385f9b2368f4b2789536c38d6d952877ff68330b43a0c5d344ee61"} Feb 28 04:49:03 crc kubenswrapper[5014]: I0228 04:49:03.815472 5014 scope.go:117] "RemoveContainer" containerID="f177cf04c8e5b392262e94fdda48a31614caf95f0916c7257c05566f9367c348" Feb 28 04:49:03 crc kubenswrapper[5014]: I0228 04:49:03.815784 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-n28fb" Feb 28 04:49:03 crc kubenswrapper[5014]: I0228 04:49:03.840834 5014 scope.go:117] "RemoveContainer" containerID="f177cf04c8e5b392262e94fdda48a31614caf95f0916c7257c05566f9367c348" Feb 28 04:49:03 crc kubenswrapper[5014]: E0228 04:49:03.843317 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f177cf04c8e5b392262e94fdda48a31614caf95f0916c7257c05566f9367c348\": container with ID starting with f177cf04c8e5b392262e94fdda48a31614caf95f0916c7257c05566f9367c348 not found: ID does not exist" containerID="f177cf04c8e5b392262e94fdda48a31614caf95f0916c7257c05566f9367c348" Feb 28 04:49:03 crc kubenswrapper[5014]: I0228 04:49:03.843382 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f177cf04c8e5b392262e94fdda48a31614caf95f0916c7257c05566f9367c348"} err="failed to get container status \"f177cf04c8e5b392262e94fdda48a31614caf95f0916c7257c05566f9367c348\": rpc error: code = NotFound desc = could not find container \"f177cf04c8e5b392262e94fdda48a31614caf95f0916c7257c05566f9367c348\": container with ID starting with f177cf04c8e5b392262e94fdda48a31614caf95f0916c7257c05566f9367c348 not found: ID does not exist" Feb 28 04:49:03 crc kubenswrapper[5014]: I0228 04:49:03.847305 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-2vp6x" podStartSLOduration=1.790633699 podStartE2EDuration="1.847274029s" podCreationTimestamp="2026-02-28 04:49:02 +0000 UTC" firstStartedPulling="2026-02-28 04:49:03.550159656 +0000 UTC m=+932.220285596" lastFinishedPulling="2026-02-28 04:49:03.606800006 +0000 UTC m=+932.276925926" observedRunningTime="2026-02-28 04:49:03.828746958 +0000 UTC m=+932.498872868" watchObservedRunningTime="2026-02-28 04:49:03.847274029 +0000 UTC m=+932.517399979" Feb 28 04:49:03 crc kubenswrapper[5014]: I0228 
04:49:03.864510 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-n28fb"] Feb 28 04:49:03 crc kubenswrapper[5014]: I0228 04:49:03.870145 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-n28fb"] Feb 28 04:49:04 crc kubenswrapper[5014]: I0228 04:49:04.182668 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80ceb56e-fdfb-47ef-8ec8-f3fad173e274" path="/var/lib/kubelet/pods/80ceb56e-fdfb-47ef-8ec8-f3fad173e274/volumes" Feb 28 04:49:04 crc kubenswrapper[5014]: I0228 04:49:04.480671 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-rkp2w" Feb 28 04:49:04 crc kubenswrapper[5014]: I0228 04:49:04.496232 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-vwrdt" Feb 28 04:49:12 crc kubenswrapper[5014]: I0228 04:49:12.987727 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-2vp6x" Feb 28 04:49:12 crc kubenswrapper[5014]: I0228 04:49:12.988288 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-2vp6x" Feb 28 04:49:13 crc kubenswrapper[5014]: I0228 04:49:13.023639 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-2vp6x" Feb 28 04:49:13 crc kubenswrapper[5014]: I0228 04:49:13.921469 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-2vp6x" Feb 28 04:49:20 crc kubenswrapper[5014]: I0228 04:49:20.294897 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm"] Feb 28 04:49:20 crc kubenswrapper[5014]: E0228 04:49:20.295596 5014 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="80ceb56e-fdfb-47ef-8ec8-f3fad173e274" containerName="registry-server" Feb 28 04:49:20 crc kubenswrapper[5014]: I0228 04:49:20.295608 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="80ceb56e-fdfb-47ef-8ec8-f3fad173e274" containerName="registry-server" Feb 28 04:49:20 crc kubenswrapper[5014]: I0228 04:49:20.295717 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="80ceb56e-fdfb-47ef-8ec8-f3fad173e274" containerName="registry-server" Feb 28 04:49:20 crc kubenswrapper[5014]: I0228 04:49:20.296488 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm" Feb 28 04:49:20 crc kubenswrapper[5014]: I0228 04:49:20.299900 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-d5z5b" Feb 28 04:49:20 crc kubenswrapper[5014]: I0228 04:49:20.326899 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm"] Feb 28 04:49:20 crc kubenswrapper[5014]: I0228 04:49:20.476049 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffv2c\" (UniqueName: \"kubernetes.io/projected/bf32d2bd-8642-45d7-ae34-876531251b37-kube-api-access-ffv2c\") pod \"e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm\" (UID: \"bf32d2bd-8642-45d7-ae34-876531251b37\") " pod="openstack-operators/e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm" Feb 28 04:49:20 crc kubenswrapper[5014]: I0228 04:49:20.476126 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bf32d2bd-8642-45d7-ae34-876531251b37-bundle\") pod \"e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm\" (UID: 
\"bf32d2bd-8642-45d7-ae34-876531251b37\") " pod="openstack-operators/e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm" Feb 28 04:49:20 crc kubenswrapper[5014]: I0228 04:49:20.476243 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bf32d2bd-8642-45d7-ae34-876531251b37-util\") pod \"e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm\" (UID: \"bf32d2bd-8642-45d7-ae34-876531251b37\") " pod="openstack-operators/e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm" Feb 28 04:49:20 crc kubenswrapper[5014]: I0228 04:49:20.577488 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bf32d2bd-8642-45d7-ae34-876531251b37-util\") pod \"e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm\" (UID: \"bf32d2bd-8642-45d7-ae34-876531251b37\") " pod="openstack-operators/e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm" Feb 28 04:49:20 crc kubenswrapper[5014]: I0228 04:49:20.577643 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffv2c\" (UniqueName: \"kubernetes.io/projected/bf32d2bd-8642-45d7-ae34-876531251b37-kube-api-access-ffv2c\") pod \"e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm\" (UID: \"bf32d2bd-8642-45d7-ae34-876531251b37\") " pod="openstack-operators/e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm" Feb 28 04:49:20 crc kubenswrapper[5014]: I0228 04:49:20.577717 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bf32d2bd-8642-45d7-ae34-876531251b37-bundle\") pod \"e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm\" (UID: \"bf32d2bd-8642-45d7-ae34-876531251b37\") " pod="openstack-operators/e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm" Feb 28 
04:49:20 crc kubenswrapper[5014]: I0228 04:49:20.578543 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bf32d2bd-8642-45d7-ae34-876531251b37-util\") pod \"e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm\" (UID: \"bf32d2bd-8642-45d7-ae34-876531251b37\") " pod="openstack-operators/e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm" Feb 28 04:49:20 crc kubenswrapper[5014]: I0228 04:49:20.578621 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bf32d2bd-8642-45d7-ae34-876531251b37-bundle\") pod \"e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm\" (UID: \"bf32d2bd-8642-45d7-ae34-876531251b37\") " pod="openstack-operators/e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm" Feb 28 04:49:20 crc kubenswrapper[5014]: I0228 04:49:20.603174 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffv2c\" (UniqueName: \"kubernetes.io/projected/bf32d2bd-8642-45d7-ae34-876531251b37-kube-api-access-ffv2c\") pod \"e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm\" (UID: \"bf32d2bd-8642-45d7-ae34-876531251b37\") " pod="openstack-operators/e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm" Feb 28 04:49:20 crc kubenswrapper[5014]: I0228 04:49:20.618261 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm" Feb 28 04:49:20 crc kubenswrapper[5014]: I0228 04:49:20.924653 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm"] Feb 28 04:49:20 crc kubenswrapper[5014]: I0228 04:49:20.942655 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm" event={"ID":"bf32d2bd-8642-45d7-ae34-876531251b37","Type":"ContainerStarted","Data":"0b90283a0d9469aa8a7b8d70914b28b16913727ca15c2097f8310d493624e730"} Feb 28 04:49:21 crc kubenswrapper[5014]: I0228 04:49:21.952100 5014 generic.go:334] "Generic (PLEG): container finished" podID="bf32d2bd-8642-45d7-ae34-876531251b37" containerID="242ceebdd12c2bd118d1b9a026a3feaf1a0d2e760f8a9f62fd404ae92e158e86" exitCode=0 Feb 28 04:49:21 crc kubenswrapper[5014]: I0228 04:49:21.952173 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm" event={"ID":"bf32d2bd-8642-45d7-ae34-876531251b37","Type":"ContainerDied","Data":"242ceebdd12c2bd118d1b9a026a3feaf1a0d2e760f8a9f62fd404ae92e158e86"} Feb 28 04:49:22 crc kubenswrapper[5014]: I0228 04:49:22.960523 5014 generic.go:334] "Generic (PLEG): container finished" podID="bf32d2bd-8642-45d7-ae34-876531251b37" containerID="fb69dd3db00527b7880d8a5bd161e2ef535414b0ef8ec5bce50413792ce2bd66" exitCode=0 Feb 28 04:49:22 crc kubenswrapper[5014]: I0228 04:49:22.960568 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm" event={"ID":"bf32d2bd-8642-45d7-ae34-876531251b37","Type":"ContainerDied","Data":"fb69dd3db00527b7880d8a5bd161e2ef535414b0ef8ec5bce50413792ce2bd66"} Feb 28 04:49:23 crc kubenswrapper[5014]: I0228 04:49:23.969586 5014 generic.go:334] 
"Generic (PLEG): container finished" podID="bf32d2bd-8642-45d7-ae34-876531251b37" containerID="4247f5dfbcb9bf4cee79a62f36e615eac6c88055f04670b322dc6c6944b5169c" exitCode=0 Feb 28 04:49:23 crc kubenswrapper[5014]: I0228 04:49:23.969672 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm" event={"ID":"bf32d2bd-8642-45d7-ae34-876531251b37","Type":"ContainerDied","Data":"4247f5dfbcb9bf4cee79a62f36e615eac6c88055f04670b322dc6c6944b5169c"} Feb 28 04:49:25 crc kubenswrapper[5014]: I0228 04:49:25.274251 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm" Feb 28 04:49:25 crc kubenswrapper[5014]: I0228 04:49:25.340453 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffv2c\" (UniqueName: \"kubernetes.io/projected/bf32d2bd-8642-45d7-ae34-876531251b37-kube-api-access-ffv2c\") pod \"bf32d2bd-8642-45d7-ae34-876531251b37\" (UID: \"bf32d2bd-8642-45d7-ae34-876531251b37\") " Feb 28 04:49:25 crc kubenswrapper[5014]: I0228 04:49:25.340509 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bf32d2bd-8642-45d7-ae34-876531251b37-util\") pod \"bf32d2bd-8642-45d7-ae34-876531251b37\" (UID: \"bf32d2bd-8642-45d7-ae34-876531251b37\") " Feb 28 04:49:25 crc kubenswrapper[5014]: I0228 04:49:25.340554 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bf32d2bd-8642-45d7-ae34-876531251b37-bundle\") pod \"bf32d2bd-8642-45d7-ae34-876531251b37\" (UID: \"bf32d2bd-8642-45d7-ae34-876531251b37\") " Feb 28 04:49:25 crc kubenswrapper[5014]: I0228 04:49:25.341374 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/bf32d2bd-8642-45d7-ae34-876531251b37-bundle" (OuterVolumeSpecName: "bundle") pod "bf32d2bd-8642-45d7-ae34-876531251b37" (UID: "bf32d2bd-8642-45d7-ae34-876531251b37"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:49:25 crc kubenswrapper[5014]: I0228 04:49:25.346782 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf32d2bd-8642-45d7-ae34-876531251b37-kube-api-access-ffv2c" (OuterVolumeSpecName: "kube-api-access-ffv2c") pod "bf32d2bd-8642-45d7-ae34-876531251b37" (UID: "bf32d2bd-8642-45d7-ae34-876531251b37"). InnerVolumeSpecName "kube-api-access-ffv2c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:49:25 crc kubenswrapper[5014]: I0228 04:49:25.359027 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf32d2bd-8642-45d7-ae34-876531251b37-util" (OuterVolumeSpecName: "util") pod "bf32d2bd-8642-45d7-ae34-876531251b37" (UID: "bf32d2bd-8642-45d7-ae34-876531251b37"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:49:25 crc kubenswrapper[5014]: I0228 04:49:25.442138 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffv2c\" (UniqueName: \"kubernetes.io/projected/bf32d2bd-8642-45d7-ae34-876531251b37-kube-api-access-ffv2c\") on node \"crc\" DevicePath \"\"" Feb 28 04:49:25 crc kubenswrapper[5014]: I0228 04:49:25.442448 5014 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bf32d2bd-8642-45d7-ae34-876531251b37-util\") on node \"crc\" DevicePath \"\"" Feb 28 04:49:25 crc kubenswrapper[5014]: I0228 04:49:25.442457 5014 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bf32d2bd-8642-45d7-ae34-876531251b37-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:49:25 crc kubenswrapper[5014]: I0228 04:49:25.983518 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm" event={"ID":"bf32d2bd-8642-45d7-ae34-876531251b37","Type":"ContainerDied","Data":"0b90283a0d9469aa8a7b8d70914b28b16913727ca15c2097f8310d493624e730"} Feb 28 04:49:25 crc kubenswrapper[5014]: I0228 04:49:25.983567 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b90283a0d9469aa8a7b8d70914b28b16913727ca15c2097f8310d493624e730" Feb 28 04:49:25 crc kubenswrapper[5014]: I0228 04:49:25.983586 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm" Feb 28 04:49:32 crc kubenswrapper[5014]: I0228 04:49:32.407880 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-dddf4b8c5-khjpf"] Feb 28 04:49:32 crc kubenswrapper[5014]: E0228 04:49:32.408696 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf32d2bd-8642-45d7-ae34-876531251b37" containerName="extract" Feb 28 04:49:32 crc kubenswrapper[5014]: I0228 04:49:32.408712 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf32d2bd-8642-45d7-ae34-876531251b37" containerName="extract" Feb 28 04:49:32 crc kubenswrapper[5014]: E0228 04:49:32.408733 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf32d2bd-8642-45d7-ae34-876531251b37" containerName="util" Feb 28 04:49:32 crc kubenswrapper[5014]: I0228 04:49:32.408740 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf32d2bd-8642-45d7-ae34-876531251b37" containerName="util" Feb 28 04:49:32 crc kubenswrapper[5014]: E0228 04:49:32.408757 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf32d2bd-8642-45d7-ae34-876531251b37" containerName="pull" Feb 28 04:49:32 crc kubenswrapper[5014]: I0228 04:49:32.408763 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf32d2bd-8642-45d7-ae34-876531251b37" containerName="pull" Feb 28 04:49:32 crc kubenswrapper[5014]: I0228 04:49:32.408958 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf32d2bd-8642-45d7-ae34-876531251b37" containerName="extract" Feb 28 04:49:32 crc kubenswrapper[5014]: I0228 04:49:32.409653 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-dddf4b8c5-khjpf" Feb 28 04:49:32 crc kubenswrapper[5014]: I0228 04:49:32.412327 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-zmgrl" Feb 28 04:49:32 crc kubenswrapper[5014]: I0228 04:49:32.422195 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-dddf4b8c5-khjpf"] Feb 28 04:49:32 crc kubenswrapper[5014]: I0228 04:49:32.541036 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrgcm\" (UniqueName: \"kubernetes.io/projected/d6538cec-6b14-4d19-92b6-e1ada175e8a8-kube-api-access-xrgcm\") pod \"openstack-operator-controller-init-dddf4b8c5-khjpf\" (UID: \"d6538cec-6b14-4d19-92b6-e1ada175e8a8\") " pod="openstack-operators/openstack-operator-controller-init-dddf4b8c5-khjpf" Feb 28 04:49:32 crc kubenswrapper[5014]: I0228 04:49:32.642319 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrgcm\" (UniqueName: \"kubernetes.io/projected/d6538cec-6b14-4d19-92b6-e1ada175e8a8-kube-api-access-xrgcm\") pod \"openstack-operator-controller-init-dddf4b8c5-khjpf\" (UID: \"d6538cec-6b14-4d19-92b6-e1ada175e8a8\") " pod="openstack-operators/openstack-operator-controller-init-dddf4b8c5-khjpf" Feb 28 04:49:32 crc kubenswrapper[5014]: I0228 04:49:32.662832 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrgcm\" (UniqueName: \"kubernetes.io/projected/d6538cec-6b14-4d19-92b6-e1ada175e8a8-kube-api-access-xrgcm\") pod \"openstack-operator-controller-init-dddf4b8c5-khjpf\" (UID: \"d6538cec-6b14-4d19-92b6-e1ada175e8a8\") " pod="openstack-operators/openstack-operator-controller-init-dddf4b8c5-khjpf" Feb 28 04:49:32 crc kubenswrapper[5014]: I0228 04:49:32.727328 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-dddf4b8c5-khjpf" Feb 28 04:49:32 crc kubenswrapper[5014]: I0228 04:49:32.944429 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-dddf4b8c5-khjpf"] Feb 28 04:49:33 crc kubenswrapper[5014]: I0228 04:49:33.027080 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-dddf4b8c5-khjpf" event={"ID":"d6538cec-6b14-4d19-92b6-e1ada175e8a8","Type":"ContainerStarted","Data":"635336eed70c77a1911df53571abc4a0f4edf08d30c2c3d2ced43025fb4f8eb3"} Feb 28 04:49:37 crc kubenswrapper[5014]: I0228 04:49:37.058221 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-dddf4b8c5-khjpf" event={"ID":"d6538cec-6b14-4d19-92b6-e1ada175e8a8","Type":"ContainerStarted","Data":"2381e40e8aeea17dcc6c0f17ca047b44f95ea425b0209b204f89f0f88b3dc983"} Feb 28 04:49:37 crc kubenswrapper[5014]: I0228 04:49:37.058693 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-dddf4b8c5-khjpf" Feb 28 04:49:37 crc kubenswrapper[5014]: I0228 04:49:37.093131 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-dddf4b8c5-khjpf" podStartSLOduration=1.532821824 podStartE2EDuration="5.093112585s" podCreationTimestamp="2026-02-28 04:49:32 +0000 UTC" firstStartedPulling="2026-02-28 04:49:32.958503137 +0000 UTC m=+961.628629047" lastFinishedPulling="2026-02-28 04:49:36.518793898 +0000 UTC m=+965.188919808" observedRunningTime="2026-02-28 04:49:37.090904436 +0000 UTC m=+965.761030346" watchObservedRunningTime="2026-02-28 04:49:37.093112585 +0000 UTC m=+965.763238515" Feb 28 04:49:42 crc kubenswrapper[5014]: I0228 04:49:42.729939 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-controller-init-dddf4b8c5-khjpf" Feb 28 04:49:45 crc kubenswrapper[5014]: I0228 04:49:45.706821 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:49:45 crc kubenswrapper[5014]: I0228 04:49:45.707097 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:49:48 crc kubenswrapper[5014]: I0228 04:49:48.188122 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g6z2x"] Feb 28 04:49:48 crc kubenswrapper[5014]: I0228 04:49:48.189571 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g6z2x" Feb 28 04:49:48 crc kubenswrapper[5014]: I0228 04:49:48.202624 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g6z2x"] Feb 28 04:49:48 crc kubenswrapper[5014]: I0228 04:49:48.209564 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/872970d0-18d1-4825-add0-22771504e688-catalog-content\") pod \"community-operators-g6z2x\" (UID: \"872970d0-18d1-4825-add0-22771504e688\") " pod="openshift-marketplace/community-operators-g6z2x" Feb 28 04:49:48 crc kubenswrapper[5014]: I0228 04:49:48.209637 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f58h5\" (UniqueName: \"kubernetes.io/projected/872970d0-18d1-4825-add0-22771504e688-kube-api-access-f58h5\") pod \"community-operators-g6z2x\" (UID: \"872970d0-18d1-4825-add0-22771504e688\") " pod="openshift-marketplace/community-operators-g6z2x" Feb 28 04:49:48 crc kubenswrapper[5014]: I0228 04:49:48.209704 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/872970d0-18d1-4825-add0-22771504e688-utilities\") pod \"community-operators-g6z2x\" (UID: \"872970d0-18d1-4825-add0-22771504e688\") " pod="openshift-marketplace/community-operators-g6z2x" Feb 28 04:49:48 crc kubenswrapper[5014]: I0228 04:49:48.311281 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/872970d0-18d1-4825-add0-22771504e688-catalog-content\") pod \"community-operators-g6z2x\" (UID: \"872970d0-18d1-4825-add0-22771504e688\") " pod="openshift-marketplace/community-operators-g6z2x" Feb 28 04:49:48 crc kubenswrapper[5014]: I0228 04:49:48.311351 5014 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-f58h5\" (UniqueName: \"kubernetes.io/projected/872970d0-18d1-4825-add0-22771504e688-kube-api-access-f58h5\") pod \"community-operators-g6z2x\" (UID: \"872970d0-18d1-4825-add0-22771504e688\") " pod="openshift-marketplace/community-operators-g6z2x" Feb 28 04:49:48 crc kubenswrapper[5014]: I0228 04:49:48.311381 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/872970d0-18d1-4825-add0-22771504e688-utilities\") pod \"community-operators-g6z2x\" (UID: \"872970d0-18d1-4825-add0-22771504e688\") " pod="openshift-marketplace/community-operators-g6z2x" Feb 28 04:49:48 crc kubenswrapper[5014]: I0228 04:49:48.311843 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/872970d0-18d1-4825-add0-22771504e688-catalog-content\") pod \"community-operators-g6z2x\" (UID: \"872970d0-18d1-4825-add0-22771504e688\") " pod="openshift-marketplace/community-operators-g6z2x" Feb 28 04:49:48 crc kubenswrapper[5014]: I0228 04:49:48.311886 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/872970d0-18d1-4825-add0-22771504e688-utilities\") pod \"community-operators-g6z2x\" (UID: \"872970d0-18d1-4825-add0-22771504e688\") " pod="openshift-marketplace/community-operators-g6z2x" Feb 28 04:49:48 crc kubenswrapper[5014]: I0228 04:49:48.333503 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f58h5\" (UniqueName: \"kubernetes.io/projected/872970d0-18d1-4825-add0-22771504e688-kube-api-access-f58h5\") pod \"community-operators-g6z2x\" (UID: \"872970d0-18d1-4825-add0-22771504e688\") " pod="openshift-marketplace/community-operators-g6z2x" Feb 28 04:49:48 crc kubenswrapper[5014]: I0228 04:49:48.521421 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g6z2x" Feb 28 04:49:48 crc kubenswrapper[5014]: I0228 04:49:48.837980 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g6z2x"] Feb 28 04:49:49 crc kubenswrapper[5014]: I0228 04:49:49.139320 5014 generic.go:334] "Generic (PLEG): container finished" podID="872970d0-18d1-4825-add0-22771504e688" containerID="c19643bbf6745f4cbffff1c8029a629afe9b29b096309b8d6ab09c6ca0b72d8c" exitCode=0 Feb 28 04:49:49 crc kubenswrapper[5014]: I0228 04:49:49.139368 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6z2x" event={"ID":"872970d0-18d1-4825-add0-22771504e688","Type":"ContainerDied","Data":"c19643bbf6745f4cbffff1c8029a629afe9b29b096309b8d6ab09c6ca0b72d8c"} Feb 28 04:49:49 crc kubenswrapper[5014]: I0228 04:49:49.139394 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6z2x" event={"ID":"872970d0-18d1-4825-add0-22771504e688","Type":"ContainerStarted","Data":"d32becd65ccfab0a1968322b7cadc238eecec37a90e2ff9cda37ee11e85d25d9"} Feb 28 04:49:51 crc kubenswrapper[5014]: I0228 04:49:51.153121 5014 generic.go:334] "Generic (PLEG): container finished" podID="872970d0-18d1-4825-add0-22771504e688" containerID="1f62eed5cab5858d1a3ca749ee006c8607af9ebfbaa3b51e13663832cb8413ef" exitCode=0 Feb 28 04:49:51 crc kubenswrapper[5014]: I0228 04:49:51.153155 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6z2x" event={"ID":"872970d0-18d1-4825-add0-22771504e688","Type":"ContainerDied","Data":"1f62eed5cab5858d1a3ca749ee006c8607af9ebfbaa3b51e13663832cb8413ef"} Feb 28 04:49:52 crc kubenswrapper[5014]: I0228 04:49:52.160515 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6z2x" 
event={"ID":"872970d0-18d1-4825-add0-22771504e688","Type":"ContainerStarted","Data":"b35c11145f208c7472d948303985a38b92fbe385c847cec065661549d90b383c"} Feb 28 04:49:52 crc kubenswrapper[5014]: I0228 04:49:52.567752 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5pfs9"] Feb 28 04:49:52 crc kubenswrapper[5014]: I0228 04:49:52.569139 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5pfs9" Feb 28 04:49:52 crc kubenswrapper[5014]: I0228 04:49:52.617191 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5pfs9"] Feb 28 04:49:52 crc kubenswrapper[5014]: I0228 04:49:52.666286 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37b2d3d2-5651-4f0f-ae09-aa0cdb06c359-catalog-content\") pod \"certified-operators-5pfs9\" (UID: \"37b2d3d2-5651-4f0f-ae09-aa0cdb06c359\") " pod="openshift-marketplace/certified-operators-5pfs9" Feb 28 04:49:52 crc kubenswrapper[5014]: I0228 04:49:52.666357 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37b2d3d2-5651-4f0f-ae09-aa0cdb06c359-utilities\") pod \"certified-operators-5pfs9\" (UID: \"37b2d3d2-5651-4f0f-ae09-aa0cdb06c359\") " pod="openshift-marketplace/certified-operators-5pfs9" Feb 28 04:49:52 crc kubenswrapper[5014]: I0228 04:49:52.666445 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rrzt\" (UniqueName: \"kubernetes.io/projected/37b2d3d2-5651-4f0f-ae09-aa0cdb06c359-kube-api-access-6rrzt\") pod \"certified-operators-5pfs9\" (UID: \"37b2d3d2-5651-4f0f-ae09-aa0cdb06c359\") " pod="openshift-marketplace/certified-operators-5pfs9" Feb 28 04:49:52 crc kubenswrapper[5014]: I0228 04:49:52.767198 
5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37b2d3d2-5651-4f0f-ae09-aa0cdb06c359-catalog-content\") pod \"certified-operators-5pfs9\" (UID: \"37b2d3d2-5651-4f0f-ae09-aa0cdb06c359\") " pod="openshift-marketplace/certified-operators-5pfs9" Feb 28 04:49:52 crc kubenswrapper[5014]: I0228 04:49:52.767258 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37b2d3d2-5651-4f0f-ae09-aa0cdb06c359-utilities\") pod \"certified-operators-5pfs9\" (UID: \"37b2d3d2-5651-4f0f-ae09-aa0cdb06c359\") " pod="openshift-marketplace/certified-operators-5pfs9" Feb 28 04:49:52 crc kubenswrapper[5014]: I0228 04:49:52.767302 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6rrzt\" (UniqueName: \"kubernetes.io/projected/37b2d3d2-5651-4f0f-ae09-aa0cdb06c359-kube-api-access-6rrzt\") pod \"certified-operators-5pfs9\" (UID: \"37b2d3d2-5651-4f0f-ae09-aa0cdb06c359\") " pod="openshift-marketplace/certified-operators-5pfs9" Feb 28 04:49:52 crc kubenswrapper[5014]: I0228 04:49:52.767626 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37b2d3d2-5651-4f0f-ae09-aa0cdb06c359-catalog-content\") pod \"certified-operators-5pfs9\" (UID: \"37b2d3d2-5651-4f0f-ae09-aa0cdb06c359\") " pod="openshift-marketplace/certified-operators-5pfs9" Feb 28 04:49:52 crc kubenswrapper[5014]: I0228 04:49:52.767744 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37b2d3d2-5651-4f0f-ae09-aa0cdb06c359-utilities\") pod \"certified-operators-5pfs9\" (UID: \"37b2d3d2-5651-4f0f-ae09-aa0cdb06c359\") " pod="openshift-marketplace/certified-operators-5pfs9" Feb 28 04:49:52 crc kubenswrapper[5014]: I0228 04:49:52.799789 5014 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6rrzt\" (UniqueName: \"kubernetes.io/projected/37b2d3d2-5651-4f0f-ae09-aa0cdb06c359-kube-api-access-6rrzt\") pod \"certified-operators-5pfs9\" (UID: \"37b2d3d2-5651-4f0f-ae09-aa0cdb06c359\") " pod="openshift-marketplace/certified-operators-5pfs9" Feb 28 04:49:52 crc kubenswrapper[5014]: I0228 04:49:52.886046 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5pfs9" Feb 28 04:49:53 crc kubenswrapper[5014]: I0228 04:49:53.185018 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g6z2x" podStartSLOduration=2.413371012 podStartE2EDuration="5.184996993s" podCreationTimestamp="2026-02-28 04:49:48 +0000 UTC" firstStartedPulling="2026-02-28 04:49:49.140548619 +0000 UTC m=+977.810674529" lastFinishedPulling="2026-02-28 04:49:51.9121746 +0000 UTC m=+980.582300510" observedRunningTime="2026-02-28 04:49:53.184764077 +0000 UTC m=+981.854889987" watchObservedRunningTime="2026-02-28 04:49:53.184996993 +0000 UTC m=+981.855122903" Feb 28 04:49:53 crc kubenswrapper[5014]: I0228 04:49:53.310894 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5pfs9"] Feb 28 04:49:54 crc kubenswrapper[5014]: I0228 04:49:54.176824 5014 generic.go:334] "Generic (PLEG): container finished" podID="37b2d3d2-5651-4f0f-ae09-aa0cdb06c359" containerID="7ae702e5ba762d5227ca532218dccbcbd7924980d8ed35a31a331a772b661805" exitCode=0 Feb 28 04:49:54 crc kubenswrapper[5014]: I0228 04:49:54.181906 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5pfs9" event={"ID":"37b2d3d2-5651-4f0f-ae09-aa0cdb06c359","Type":"ContainerDied","Data":"7ae702e5ba762d5227ca532218dccbcbd7924980d8ed35a31a331a772b661805"} Feb 28 04:49:54 crc kubenswrapper[5014]: I0228 04:49:54.181957 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-5pfs9" event={"ID":"37b2d3d2-5651-4f0f-ae09-aa0cdb06c359","Type":"ContainerStarted","Data":"0c792f316794d5b9b7a35597c6091c37ffc3b551a86eef7ba0c176c245ad5ff3"} Feb 28 04:49:56 crc kubenswrapper[5014]: I0228 04:49:56.198387 5014 generic.go:334] "Generic (PLEG): container finished" podID="37b2d3d2-5651-4f0f-ae09-aa0cdb06c359" containerID="43f514ebfb4520479238e80d89089e3aa90c85aeac03a86e8cc5c9213a60b795" exitCode=0 Feb 28 04:49:56 crc kubenswrapper[5014]: I0228 04:49:56.198690 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5pfs9" event={"ID":"37b2d3d2-5651-4f0f-ae09-aa0cdb06c359","Type":"ContainerDied","Data":"43f514ebfb4520479238e80d89089e3aa90c85aeac03a86e8cc5c9213a60b795"} Feb 28 04:49:57 crc kubenswrapper[5014]: I0228 04:49:57.989020 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zqbz9"] Feb 28 04:49:57 crc kubenswrapper[5014]: I0228 04:49:57.992434 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zqbz9" Feb 28 04:49:58 crc kubenswrapper[5014]: I0228 04:49:58.003139 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zqbz9"] Feb 28 04:49:58 crc kubenswrapper[5014]: I0228 04:49:58.151690 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f2707ec-866d-45fd-8983-c70ac8018def-utilities\") pod \"redhat-marketplace-zqbz9\" (UID: \"1f2707ec-866d-45fd-8983-c70ac8018def\") " pod="openshift-marketplace/redhat-marketplace-zqbz9" Feb 28 04:49:58 crc kubenswrapper[5014]: I0228 04:49:58.151744 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82pc6\" (UniqueName: \"kubernetes.io/projected/1f2707ec-866d-45fd-8983-c70ac8018def-kube-api-access-82pc6\") pod \"redhat-marketplace-zqbz9\" (UID: \"1f2707ec-866d-45fd-8983-c70ac8018def\") " pod="openshift-marketplace/redhat-marketplace-zqbz9" Feb 28 04:49:58 crc kubenswrapper[5014]: I0228 04:49:58.151776 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f2707ec-866d-45fd-8983-c70ac8018def-catalog-content\") pod \"redhat-marketplace-zqbz9\" (UID: \"1f2707ec-866d-45fd-8983-c70ac8018def\") " pod="openshift-marketplace/redhat-marketplace-zqbz9" Feb 28 04:49:58 crc kubenswrapper[5014]: I0228 04:49:58.213534 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5pfs9" event={"ID":"37b2d3d2-5651-4f0f-ae09-aa0cdb06c359","Type":"ContainerStarted","Data":"51b0726d71bf64f3d146cdd9736275302f868c9ee168bf34dcd2809430b64fd2"} Feb 28 04:49:58 crc kubenswrapper[5014]: I0228 04:49:58.232184 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-5pfs9" podStartSLOduration=2.773165714 podStartE2EDuration="6.232163173s" podCreationTimestamp="2026-02-28 04:49:52 +0000 UTC" firstStartedPulling="2026-02-28 04:49:54.179734553 +0000 UTC m=+982.849860463" lastFinishedPulling="2026-02-28 04:49:57.638732002 +0000 UTC m=+986.308857922" observedRunningTime="2026-02-28 04:49:58.23055809 +0000 UTC m=+986.900684020" watchObservedRunningTime="2026-02-28 04:49:58.232163173 +0000 UTC m=+986.902289103" Feb 28 04:49:58 crc kubenswrapper[5014]: I0228 04:49:58.253246 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f2707ec-866d-45fd-8983-c70ac8018def-utilities\") pod \"redhat-marketplace-zqbz9\" (UID: \"1f2707ec-866d-45fd-8983-c70ac8018def\") " pod="openshift-marketplace/redhat-marketplace-zqbz9" Feb 28 04:49:58 crc kubenswrapper[5014]: I0228 04:49:58.253324 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82pc6\" (UniqueName: \"kubernetes.io/projected/1f2707ec-866d-45fd-8983-c70ac8018def-kube-api-access-82pc6\") pod \"redhat-marketplace-zqbz9\" (UID: \"1f2707ec-866d-45fd-8983-c70ac8018def\") " pod="openshift-marketplace/redhat-marketplace-zqbz9" Feb 28 04:49:58 crc kubenswrapper[5014]: I0228 04:49:58.253389 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f2707ec-866d-45fd-8983-c70ac8018def-catalog-content\") pod \"redhat-marketplace-zqbz9\" (UID: \"1f2707ec-866d-45fd-8983-c70ac8018def\") " pod="openshift-marketplace/redhat-marketplace-zqbz9" Feb 28 04:49:58 crc kubenswrapper[5014]: I0228 04:49:58.254123 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f2707ec-866d-45fd-8983-c70ac8018def-catalog-content\") pod \"redhat-marketplace-zqbz9\" (UID: 
\"1f2707ec-866d-45fd-8983-c70ac8018def\") " pod="openshift-marketplace/redhat-marketplace-zqbz9" Feb 28 04:49:58 crc kubenswrapper[5014]: I0228 04:49:58.254490 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f2707ec-866d-45fd-8983-c70ac8018def-utilities\") pod \"redhat-marketplace-zqbz9\" (UID: \"1f2707ec-866d-45fd-8983-c70ac8018def\") " pod="openshift-marketplace/redhat-marketplace-zqbz9" Feb 28 04:49:58 crc kubenswrapper[5014]: I0228 04:49:58.287495 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82pc6\" (UniqueName: \"kubernetes.io/projected/1f2707ec-866d-45fd-8983-c70ac8018def-kube-api-access-82pc6\") pod \"redhat-marketplace-zqbz9\" (UID: \"1f2707ec-866d-45fd-8983-c70ac8018def\") " pod="openshift-marketplace/redhat-marketplace-zqbz9" Feb 28 04:49:58 crc kubenswrapper[5014]: I0228 04:49:58.322329 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zqbz9" Feb 28 04:49:58 crc kubenswrapper[5014]: I0228 04:49:58.522184 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g6z2x" Feb 28 04:49:58 crc kubenswrapper[5014]: I0228 04:49:58.522328 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g6z2x" Feb 28 04:49:58 crc kubenswrapper[5014]: I0228 04:49:58.570306 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g6z2x" Feb 28 04:49:58 crc kubenswrapper[5014]: I0228 04:49:58.747222 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zqbz9"] Feb 28 04:49:58 crc kubenswrapper[5014]: W0228 04:49:58.751083 5014 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1f2707ec_866d_45fd_8983_c70ac8018def.slice/crio-ba93f01ade138274bc8fb611d8c88ebde0ccf1d50f1ac39cc96c723e66f9bac7 WatchSource:0}: Error finding container ba93f01ade138274bc8fb611d8c88ebde0ccf1d50f1ac39cc96c723e66f9bac7: Status 404 returned error can't find the container with id ba93f01ade138274bc8fb611d8c88ebde0ccf1d50f1ac39cc96c723e66f9bac7 Feb 28 04:49:59 crc kubenswrapper[5014]: I0228 04:49:59.222402 5014 generic.go:334] "Generic (PLEG): container finished" podID="1f2707ec-866d-45fd-8983-c70ac8018def" containerID="ee924a9b4b56846f0c28dca62b684183277dafd5e1b4c9437995c7e435bdb0de" exitCode=0 Feb 28 04:49:59 crc kubenswrapper[5014]: I0228 04:49:59.222483 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zqbz9" event={"ID":"1f2707ec-866d-45fd-8983-c70ac8018def","Type":"ContainerDied","Data":"ee924a9b4b56846f0c28dca62b684183277dafd5e1b4c9437995c7e435bdb0de"} Feb 28 04:49:59 crc kubenswrapper[5014]: I0228 04:49:59.222576 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zqbz9" event={"ID":"1f2707ec-866d-45fd-8983-c70ac8018def","Type":"ContainerStarted","Data":"ba93f01ade138274bc8fb611d8c88ebde0ccf1d50f1ac39cc96c723e66f9bac7"} Feb 28 04:49:59 crc kubenswrapper[5014]: I0228 04:49:59.272356 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g6z2x" Feb 28 04:50:00 crc kubenswrapper[5014]: I0228 04:50:00.127764 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537570-cjqpt"] Feb 28 04:50:00 crc kubenswrapper[5014]: I0228 04:50:00.128725 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537570-cjqpt" Feb 28 04:50:00 crc kubenswrapper[5014]: I0228 04:50:00.130942 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 04:50:00 crc kubenswrapper[5014]: I0228 04:50:00.131161 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 04:50:00 crc kubenswrapper[5014]: I0228 04:50:00.131336 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-r6pdk" Feb 28 04:50:00 crc kubenswrapper[5014]: I0228 04:50:00.141768 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537570-cjqpt"] Feb 28 04:50:00 crc kubenswrapper[5014]: I0228 04:50:00.280715 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rccdk\" (UniqueName: \"kubernetes.io/projected/d816e108-724b-47c0-a6a2-6499c9c56252-kube-api-access-rccdk\") pod \"auto-csr-approver-29537570-cjqpt\" (UID: \"d816e108-724b-47c0-a6a2-6499c9c56252\") " pod="openshift-infra/auto-csr-approver-29537570-cjqpt" Feb 28 04:50:00 crc kubenswrapper[5014]: I0228 04:50:00.382540 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rccdk\" (UniqueName: \"kubernetes.io/projected/d816e108-724b-47c0-a6a2-6499c9c56252-kube-api-access-rccdk\") pod \"auto-csr-approver-29537570-cjqpt\" (UID: \"d816e108-724b-47c0-a6a2-6499c9c56252\") " pod="openshift-infra/auto-csr-approver-29537570-cjqpt" Feb 28 04:50:00 crc kubenswrapper[5014]: I0228 04:50:00.405555 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rccdk\" (UniqueName: \"kubernetes.io/projected/d816e108-724b-47c0-a6a2-6499c9c56252-kube-api-access-rccdk\") pod \"auto-csr-approver-29537570-cjqpt\" (UID: \"d816e108-724b-47c0-a6a2-6499c9c56252\") " 
pod="openshift-infra/auto-csr-approver-29537570-cjqpt" Feb 28 04:50:00 crc kubenswrapper[5014]: I0228 04:50:00.445972 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537570-cjqpt" Feb 28 04:50:00 crc kubenswrapper[5014]: I0228 04:50:00.902974 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537570-cjqpt"] Feb 28 04:50:01 crc kubenswrapper[5014]: I0228 04:50:01.246913 5014 generic.go:334] "Generic (PLEG): container finished" podID="1f2707ec-866d-45fd-8983-c70ac8018def" containerID="aa747a0f3da7972e5238d9c6d072692243b59dec6a903fcfc9c03248e8a6f7cd" exitCode=0 Feb 28 04:50:01 crc kubenswrapper[5014]: I0228 04:50:01.247056 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zqbz9" event={"ID":"1f2707ec-866d-45fd-8983-c70ac8018def","Type":"ContainerDied","Data":"aa747a0f3da7972e5238d9c6d072692243b59dec6a903fcfc9c03248e8a6f7cd"} Feb 28 04:50:01 crc kubenswrapper[5014]: I0228 04:50:01.250290 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537570-cjqpt" event={"ID":"d816e108-724b-47c0-a6a2-6499c9c56252","Type":"ContainerStarted","Data":"07fda0d29723dfb011d389a7f2cc79db06f34e5888ff4932c930292cb6d68852"} Feb 28 04:50:01 crc kubenswrapper[5014]: I0228 04:50:01.768559 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g6z2x"] Feb 28 04:50:02 crc kubenswrapper[5014]: I0228 04:50:02.257385 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g6z2x" podUID="872970d0-18d1-4825-add0-22771504e688" containerName="registry-server" containerID="cri-o://b35c11145f208c7472d948303985a38b92fbe385c847cec065661549d90b383c" gracePeriod=2 Feb 28 04:50:02 crc kubenswrapper[5014]: I0228 04:50:02.886484 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-5pfs9" Feb 28 04:50:02 crc kubenswrapper[5014]: I0228 04:50:02.886846 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5pfs9" Feb 28 04:50:02 crc kubenswrapper[5014]: I0228 04:50:02.942645 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5pfs9" Feb 28 04:50:03 crc kubenswrapper[5014]: I0228 04:50:03.179189 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g6z2x" Feb 28 04:50:03 crc kubenswrapper[5014]: I0228 04:50:03.266130 5014 generic.go:334] "Generic (PLEG): container finished" podID="872970d0-18d1-4825-add0-22771504e688" containerID="b35c11145f208c7472d948303985a38b92fbe385c847cec065661549d90b383c" exitCode=0 Feb 28 04:50:03 crc kubenswrapper[5014]: I0228 04:50:03.266229 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6z2x" event={"ID":"872970d0-18d1-4825-add0-22771504e688","Type":"ContainerDied","Data":"b35c11145f208c7472d948303985a38b92fbe385c847cec065661549d90b383c"} Feb 28 04:50:03 crc kubenswrapper[5014]: I0228 04:50:03.266277 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g6z2x" event={"ID":"872970d0-18d1-4825-add0-22771504e688","Type":"ContainerDied","Data":"d32becd65ccfab0a1968322b7cadc238eecec37a90e2ff9cda37ee11e85d25d9"} Feb 28 04:50:03 crc kubenswrapper[5014]: I0228 04:50:03.266297 5014 scope.go:117] "RemoveContainer" containerID="b35c11145f208c7472d948303985a38b92fbe385c847cec065661549d90b383c" Feb 28 04:50:03 crc kubenswrapper[5014]: I0228 04:50:03.267122 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g6z2x" Feb 28 04:50:03 crc kubenswrapper[5014]: I0228 04:50:03.268879 5014 generic.go:334] "Generic (PLEG): container finished" podID="d816e108-724b-47c0-a6a2-6499c9c56252" containerID="36e86e4f808ab2a90ca07bb71d852d074d15aad41b7d840a859b88051549d83b" exitCode=0 Feb 28 04:50:03 crc kubenswrapper[5014]: I0228 04:50:03.268933 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537570-cjqpt" event={"ID":"d816e108-724b-47c0-a6a2-6499c9c56252","Type":"ContainerDied","Data":"36e86e4f808ab2a90ca07bb71d852d074d15aad41b7d840a859b88051549d83b"} Feb 28 04:50:03 crc kubenswrapper[5014]: I0228 04:50:03.271503 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zqbz9" event={"ID":"1f2707ec-866d-45fd-8983-c70ac8018def","Type":"ContainerStarted","Data":"bf825b2de63e7eda0e1f30484d43a9720c138e77f42eb8073425884195a7be52"} Feb 28 04:50:03 crc kubenswrapper[5014]: I0228 04:50:03.286374 5014 scope.go:117] "RemoveContainer" containerID="1f62eed5cab5858d1a3ca749ee006c8607af9ebfbaa3b51e13663832cb8413ef" Feb 28 04:50:03 crc kubenswrapper[5014]: I0228 04:50:03.311117 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zqbz9" podStartSLOduration=3.718282371 podStartE2EDuration="6.311099489s" podCreationTimestamp="2026-02-28 04:49:57 +0000 UTC" firstStartedPulling="2026-02-28 04:49:59.224093826 +0000 UTC m=+987.894219746" lastFinishedPulling="2026-02-28 04:50:01.816910904 +0000 UTC m=+990.487036864" observedRunningTime="2026-02-28 04:50:03.30815707 +0000 UTC m=+991.978282980" watchObservedRunningTime="2026-02-28 04:50:03.311099489 +0000 UTC m=+991.981225399" Feb 28 04:50:03 crc kubenswrapper[5014]: I0228 04:50:03.311969 5014 scope.go:117] "RemoveContainer" containerID="c19643bbf6745f4cbffff1c8029a629afe9b29b096309b8d6ab09c6ca0b72d8c" Feb 28 04:50:03 crc 
kubenswrapper[5014]: I0228 04:50:03.319446 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5pfs9" Feb 28 04:50:03 crc kubenswrapper[5014]: I0228 04:50:03.327205 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f58h5\" (UniqueName: \"kubernetes.io/projected/872970d0-18d1-4825-add0-22771504e688-kube-api-access-f58h5\") pod \"872970d0-18d1-4825-add0-22771504e688\" (UID: \"872970d0-18d1-4825-add0-22771504e688\") " Feb 28 04:50:03 crc kubenswrapper[5014]: I0228 04:50:03.327276 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/872970d0-18d1-4825-add0-22771504e688-utilities\") pod \"872970d0-18d1-4825-add0-22771504e688\" (UID: \"872970d0-18d1-4825-add0-22771504e688\") " Feb 28 04:50:03 crc kubenswrapper[5014]: I0228 04:50:03.327314 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/872970d0-18d1-4825-add0-22771504e688-catalog-content\") pod \"872970d0-18d1-4825-add0-22771504e688\" (UID: \"872970d0-18d1-4825-add0-22771504e688\") " Feb 28 04:50:03 crc kubenswrapper[5014]: I0228 04:50:03.329828 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/872970d0-18d1-4825-add0-22771504e688-utilities" (OuterVolumeSpecName: "utilities") pod "872970d0-18d1-4825-add0-22771504e688" (UID: "872970d0-18d1-4825-add0-22771504e688"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:50:03 crc kubenswrapper[5014]: I0228 04:50:03.334172 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/872970d0-18d1-4825-add0-22771504e688-kube-api-access-f58h5" (OuterVolumeSpecName: "kube-api-access-f58h5") pod "872970d0-18d1-4825-add0-22771504e688" (UID: "872970d0-18d1-4825-add0-22771504e688"). InnerVolumeSpecName "kube-api-access-f58h5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:50:03 crc kubenswrapper[5014]: I0228 04:50:03.340711 5014 scope.go:117] "RemoveContainer" containerID="b35c11145f208c7472d948303985a38b92fbe385c847cec065661549d90b383c" Feb 28 04:50:03 crc kubenswrapper[5014]: E0228 04:50:03.341653 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b35c11145f208c7472d948303985a38b92fbe385c847cec065661549d90b383c\": container with ID starting with b35c11145f208c7472d948303985a38b92fbe385c847cec065661549d90b383c not found: ID does not exist" containerID="b35c11145f208c7472d948303985a38b92fbe385c847cec065661549d90b383c" Feb 28 04:50:03 crc kubenswrapper[5014]: I0228 04:50:03.341708 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b35c11145f208c7472d948303985a38b92fbe385c847cec065661549d90b383c"} err="failed to get container status \"b35c11145f208c7472d948303985a38b92fbe385c847cec065661549d90b383c\": rpc error: code = NotFound desc = could not find container \"b35c11145f208c7472d948303985a38b92fbe385c847cec065661549d90b383c\": container with ID starting with b35c11145f208c7472d948303985a38b92fbe385c847cec065661549d90b383c not found: ID does not exist" Feb 28 04:50:03 crc kubenswrapper[5014]: I0228 04:50:03.341742 5014 scope.go:117] "RemoveContainer" containerID="1f62eed5cab5858d1a3ca749ee006c8607af9ebfbaa3b51e13663832cb8413ef" Feb 28 04:50:03 crc kubenswrapper[5014]: E0228 04:50:03.343217 
5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f62eed5cab5858d1a3ca749ee006c8607af9ebfbaa3b51e13663832cb8413ef\": container with ID starting with 1f62eed5cab5858d1a3ca749ee006c8607af9ebfbaa3b51e13663832cb8413ef not found: ID does not exist" containerID="1f62eed5cab5858d1a3ca749ee006c8607af9ebfbaa3b51e13663832cb8413ef" Feb 28 04:50:03 crc kubenswrapper[5014]: I0228 04:50:03.343241 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f62eed5cab5858d1a3ca749ee006c8607af9ebfbaa3b51e13663832cb8413ef"} err="failed to get container status \"1f62eed5cab5858d1a3ca749ee006c8607af9ebfbaa3b51e13663832cb8413ef\": rpc error: code = NotFound desc = could not find container \"1f62eed5cab5858d1a3ca749ee006c8607af9ebfbaa3b51e13663832cb8413ef\": container with ID starting with 1f62eed5cab5858d1a3ca749ee006c8607af9ebfbaa3b51e13663832cb8413ef not found: ID does not exist" Feb 28 04:50:03 crc kubenswrapper[5014]: I0228 04:50:03.343254 5014 scope.go:117] "RemoveContainer" containerID="c19643bbf6745f4cbffff1c8029a629afe9b29b096309b8d6ab09c6ca0b72d8c" Feb 28 04:50:03 crc kubenswrapper[5014]: E0228 04:50:03.343637 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c19643bbf6745f4cbffff1c8029a629afe9b29b096309b8d6ab09c6ca0b72d8c\": container with ID starting with c19643bbf6745f4cbffff1c8029a629afe9b29b096309b8d6ab09c6ca0b72d8c not found: ID does not exist" containerID="c19643bbf6745f4cbffff1c8029a629afe9b29b096309b8d6ab09c6ca0b72d8c" Feb 28 04:50:03 crc kubenswrapper[5014]: I0228 04:50:03.343659 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c19643bbf6745f4cbffff1c8029a629afe9b29b096309b8d6ab09c6ca0b72d8c"} err="failed to get container status \"c19643bbf6745f4cbffff1c8029a629afe9b29b096309b8d6ab09c6ca0b72d8c\": rpc error: code = 
NotFound desc = could not find container \"c19643bbf6745f4cbffff1c8029a629afe9b29b096309b8d6ab09c6ca0b72d8c\": container with ID starting with c19643bbf6745f4cbffff1c8029a629afe9b29b096309b8d6ab09c6ca0b72d8c not found: ID does not exist" Feb 28 04:50:03 crc kubenswrapper[5014]: I0228 04:50:03.389854 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/872970d0-18d1-4825-add0-22771504e688-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "872970d0-18d1-4825-add0-22771504e688" (UID: "872970d0-18d1-4825-add0-22771504e688"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:50:03 crc kubenswrapper[5014]: I0228 04:50:03.428420 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f58h5\" (UniqueName: \"kubernetes.io/projected/872970d0-18d1-4825-add0-22771504e688-kube-api-access-f58h5\") on node \"crc\" DevicePath \"\"" Feb 28 04:50:03 crc kubenswrapper[5014]: I0228 04:50:03.428454 5014 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/872970d0-18d1-4825-add0-22771504e688-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 04:50:03 crc kubenswrapper[5014]: I0228 04:50:03.428468 5014 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/872970d0-18d1-4825-add0-22771504e688-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 04:50:03 crc kubenswrapper[5014]: I0228 04:50:03.598445 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g6z2x"] Feb 28 04:50:03 crc kubenswrapper[5014]: I0228 04:50:03.605485 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g6z2x"] Feb 28 04:50:04 crc kubenswrapper[5014]: I0228 04:50:04.180319 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="872970d0-18d1-4825-add0-22771504e688" path="/var/lib/kubelet/pods/872970d0-18d1-4825-add0-22771504e688/volumes" Feb 28 04:50:04 crc kubenswrapper[5014]: I0228 04:50:04.539089 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537570-cjqpt" Feb 28 04:50:04 crc kubenswrapper[5014]: I0228 04:50:04.641908 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rccdk\" (UniqueName: \"kubernetes.io/projected/d816e108-724b-47c0-a6a2-6499c9c56252-kube-api-access-rccdk\") pod \"d816e108-724b-47c0-a6a2-6499c9c56252\" (UID: \"d816e108-724b-47c0-a6a2-6499c9c56252\") " Feb 28 04:50:04 crc kubenswrapper[5014]: I0228 04:50:04.647205 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d816e108-724b-47c0-a6a2-6499c9c56252-kube-api-access-rccdk" (OuterVolumeSpecName: "kube-api-access-rccdk") pod "d816e108-724b-47c0-a6a2-6499c9c56252" (UID: "d816e108-724b-47c0-a6a2-6499c9c56252"). InnerVolumeSpecName "kube-api-access-rccdk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:50:04 crc kubenswrapper[5014]: I0228 04:50:04.744055 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rccdk\" (UniqueName: \"kubernetes.io/projected/d816e108-724b-47c0-a6a2-6499c9c56252-kube-api-access-rccdk\") on node \"crc\" DevicePath \"\"" Feb 28 04:50:05 crc kubenswrapper[5014]: I0228 04:50:05.285585 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537570-cjqpt" event={"ID":"d816e108-724b-47c0-a6a2-6499c9c56252","Type":"ContainerDied","Data":"07fda0d29723dfb011d389a7f2cc79db06f34e5888ff4932c930292cb6d68852"} Feb 28 04:50:05 crc kubenswrapper[5014]: I0228 04:50:05.285626 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07fda0d29723dfb011d389a7f2cc79db06f34e5888ff4932c930292cb6d68852" Feb 28 04:50:05 crc kubenswrapper[5014]: I0228 04:50:05.285672 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537570-cjqpt" Feb 28 04:50:05 crc kubenswrapper[5014]: I0228 04:50:05.600677 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537564-j9zpx"] Feb 28 04:50:05 crc kubenswrapper[5014]: I0228 04:50:05.611706 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537564-j9zpx"] Feb 28 04:50:06 crc kubenswrapper[5014]: I0228 04:50:06.169139 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5pfs9"] Feb 28 04:50:06 crc kubenswrapper[5014]: I0228 04:50:06.169508 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5pfs9" podUID="37b2d3d2-5651-4f0f-ae09-aa0cdb06c359" containerName="registry-server" containerID="cri-o://51b0726d71bf64f3d146cdd9736275302f868c9ee168bf34dcd2809430b64fd2" gracePeriod=2 Feb 28 04:50:06 crc 
kubenswrapper[5014]: I0228 04:50:06.185485 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1dd4f84a-9c92-49e2-8887-603c7560f417" path="/var/lib/kubelet/pods/1dd4f84a-9c92-49e2-8887-603c7560f417/volumes" Feb 28 04:50:08 crc kubenswrapper[5014]: I0228 04:50:08.322879 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zqbz9" Feb 28 04:50:08 crc kubenswrapper[5014]: I0228 04:50:08.323292 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zqbz9" Feb 28 04:50:08 crc kubenswrapper[5014]: I0228 04:50:08.387541 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zqbz9" Feb 28 04:50:08 crc kubenswrapper[5014]: I0228 04:50:08.969283 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5pfs9" Feb 28 04:50:09 crc kubenswrapper[5014]: I0228 04:50:09.111876 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rrzt\" (UniqueName: \"kubernetes.io/projected/37b2d3d2-5651-4f0f-ae09-aa0cdb06c359-kube-api-access-6rrzt\") pod \"37b2d3d2-5651-4f0f-ae09-aa0cdb06c359\" (UID: \"37b2d3d2-5651-4f0f-ae09-aa0cdb06c359\") " Feb 28 04:50:09 crc kubenswrapper[5014]: I0228 04:50:09.111939 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37b2d3d2-5651-4f0f-ae09-aa0cdb06c359-utilities\") pod \"37b2d3d2-5651-4f0f-ae09-aa0cdb06c359\" (UID: \"37b2d3d2-5651-4f0f-ae09-aa0cdb06c359\") " Feb 28 04:50:09 crc kubenswrapper[5014]: I0228 04:50:09.112131 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37b2d3d2-5651-4f0f-ae09-aa0cdb06c359-catalog-content\") pod 
\"37b2d3d2-5651-4f0f-ae09-aa0cdb06c359\" (UID: \"37b2d3d2-5651-4f0f-ae09-aa0cdb06c359\") " Feb 28 04:50:09 crc kubenswrapper[5014]: I0228 04:50:09.121526 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37b2d3d2-5651-4f0f-ae09-aa0cdb06c359-utilities" (OuterVolumeSpecName: "utilities") pod "37b2d3d2-5651-4f0f-ae09-aa0cdb06c359" (UID: "37b2d3d2-5651-4f0f-ae09-aa0cdb06c359"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:50:09 crc kubenswrapper[5014]: I0228 04:50:09.131159 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37b2d3d2-5651-4f0f-ae09-aa0cdb06c359-kube-api-access-6rrzt" (OuterVolumeSpecName: "kube-api-access-6rrzt") pod "37b2d3d2-5651-4f0f-ae09-aa0cdb06c359" (UID: "37b2d3d2-5651-4f0f-ae09-aa0cdb06c359"). InnerVolumeSpecName "kube-api-access-6rrzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:50:09 crc kubenswrapper[5014]: I0228 04:50:09.172785 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37b2d3d2-5651-4f0f-ae09-aa0cdb06c359-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37b2d3d2-5651-4f0f-ae09-aa0cdb06c359" (UID: "37b2d3d2-5651-4f0f-ae09-aa0cdb06c359"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:50:09 crc kubenswrapper[5014]: I0228 04:50:09.213659 5014 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37b2d3d2-5651-4f0f-ae09-aa0cdb06c359-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 04:50:09 crc kubenswrapper[5014]: I0228 04:50:09.213691 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6rrzt\" (UniqueName: \"kubernetes.io/projected/37b2d3d2-5651-4f0f-ae09-aa0cdb06c359-kube-api-access-6rrzt\") on node \"crc\" DevicePath \"\"" Feb 28 04:50:09 crc kubenswrapper[5014]: I0228 04:50:09.213703 5014 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37b2d3d2-5651-4f0f-ae09-aa0cdb06c359-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 04:50:09 crc kubenswrapper[5014]: I0228 04:50:09.315894 5014 generic.go:334] "Generic (PLEG): container finished" podID="37b2d3d2-5651-4f0f-ae09-aa0cdb06c359" containerID="51b0726d71bf64f3d146cdd9736275302f868c9ee168bf34dcd2809430b64fd2" exitCode=0 Feb 28 04:50:09 crc kubenswrapper[5014]: I0228 04:50:09.315971 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5pfs9" Feb 28 04:50:09 crc kubenswrapper[5014]: I0228 04:50:09.315967 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5pfs9" event={"ID":"37b2d3d2-5651-4f0f-ae09-aa0cdb06c359","Type":"ContainerDied","Data":"51b0726d71bf64f3d146cdd9736275302f868c9ee168bf34dcd2809430b64fd2"} Feb 28 04:50:09 crc kubenswrapper[5014]: I0228 04:50:09.316025 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5pfs9" event={"ID":"37b2d3d2-5651-4f0f-ae09-aa0cdb06c359","Type":"ContainerDied","Data":"0c792f316794d5b9b7a35597c6091c37ffc3b551a86eef7ba0c176c245ad5ff3"} Feb 28 04:50:09 crc kubenswrapper[5014]: I0228 04:50:09.316047 5014 scope.go:117] "RemoveContainer" containerID="51b0726d71bf64f3d146cdd9736275302f868c9ee168bf34dcd2809430b64fd2" Feb 28 04:50:09 crc kubenswrapper[5014]: I0228 04:50:09.340045 5014 scope.go:117] "RemoveContainer" containerID="43f514ebfb4520479238e80d89089e3aa90c85aeac03a86e8cc5c9213a60b795" Feb 28 04:50:09 crc kubenswrapper[5014]: I0228 04:50:09.355800 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5pfs9"] Feb 28 04:50:09 crc kubenswrapper[5014]: I0228 04:50:09.359628 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zqbz9" Feb 28 04:50:09 crc kubenswrapper[5014]: I0228 04:50:09.361908 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5pfs9"] Feb 28 04:50:09 crc kubenswrapper[5014]: I0228 04:50:09.367214 5014 scope.go:117] "RemoveContainer" containerID="7ae702e5ba762d5227ca532218dccbcbd7924980d8ed35a31a331a772b661805" Feb 28 04:50:09 crc kubenswrapper[5014]: I0228 04:50:09.393291 5014 scope.go:117] "RemoveContainer" containerID="51b0726d71bf64f3d146cdd9736275302f868c9ee168bf34dcd2809430b64fd2" Feb 28 04:50:09 crc 
kubenswrapper[5014]: E0228 04:50:09.393763 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51b0726d71bf64f3d146cdd9736275302f868c9ee168bf34dcd2809430b64fd2\": container with ID starting with 51b0726d71bf64f3d146cdd9736275302f868c9ee168bf34dcd2809430b64fd2 not found: ID does not exist" containerID="51b0726d71bf64f3d146cdd9736275302f868c9ee168bf34dcd2809430b64fd2" Feb 28 04:50:09 crc kubenswrapper[5014]: I0228 04:50:09.393883 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51b0726d71bf64f3d146cdd9736275302f868c9ee168bf34dcd2809430b64fd2"} err="failed to get container status \"51b0726d71bf64f3d146cdd9736275302f868c9ee168bf34dcd2809430b64fd2\": rpc error: code = NotFound desc = could not find container \"51b0726d71bf64f3d146cdd9736275302f868c9ee168bf34dcd2809430b64fd2\": container with ID starting with 51b0726d71bf64f3d146cdd9736275302f868c9ee168bf34dcd2809430b64fd2 not found: ID does not exist" Feb 28 04:50:09 crc kubenswrapper[5014]: I0228 04:50:09.393913 5014 scope.go:117] "RemoveContainer" containerID="43f514ebfb4520479238e80d89089e3aa90c85aeac03a86e8cc5c9213a60b795" Feb 28 04:50:09 crc kubenswrapper[5014]: E0228 04:50:09.394291 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43f514ebfb4520479238e80d89089e3aa90c85aeac03a86e8cc5c9213a60b795\": container with ID starting with 43f514ebfb4520479238e80d89089e3aa90c85aeac03a86e8cc5c9213a60b795 not found: ID does not exist" containerID="43f514ebfb4520479238e80d89089e3aa90c85aeac03a86e8cc5c9213a60b795" Feb 28 04:50:09 crc kubenswrapper[5014]: I0228 04:50:09.394344 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43f514ebfb4520479238e80d89089e3aa90c85aeac03a86e8cc5c9213a60b795"} err="failed to get container status 
\"43f514ebfb4520479238e80d89089e3aa90c85aeac03a86e8cc5c9213a60b795\": rpc error: code = NotFound desc = could not find container \"43f514ebfb4520479238e80d89089e3aa90c85aeac03a86e8cc5c9213a60b795\": container with ID starting with 43f514ebfb4520479238e80d89089e3aa90c85aeac03a86e8cc5c9213a60b795 not found: ID does not exist" Feb 28 04:50:09 crc kubenswrapper[5014]: I0228 04:50:09.394361 5014 scope.go:117] "RemoveContainer" containerID="7ae702e5ba762d5227ca532218dccbcbd7924980d8ed35a31a331a772b661805" Feb 28 04:50:09 crc kubenswrapper[5014]: E0228 04:50:09.394621 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ae702e5ba762d5227ca532218dccbcbd7924980d8ed35a31a331a772b661805\": container with ID starting with 7ae702e5ba762d5227ca532218dccbcbd7924980d8ed35a31a331a772b661805 not found: ID does not exist" containerID="7ae702e5ba762d5227ca532218dccbcbd7924980d8ed35a31a331a772b661805" Feb 28 04:50:09 crc kubenswrapper[5014]: I0228 04:50:09.394675 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ae702e5ba762d5227ca532218dccbcbd7924980d8ed35a31a331a772b661805"} err="failed to get container status \"7ae702e5ba762d5227ca532218dccbcbd7924980d8ed35a31a331a772b661805\": rpc error: code = NotFound desc = could not find container \"7ae702e5ba762d5227ca532218dccbcbd7924980d8ed35a31a331a772b661805\": container with ID starting with 7ae702e5ba762d5227ca532218dccbcbd7924980d8ed35a31a331a772b661805 not found: ID does not exist" Feb 28 04:50:09 crc kubenswrapper[5014]: I0228 04:50:09.959884 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zqbz9"] Feb 28 04:50:10 crc kubenswrapper[5014]: I0228 04:50:10.178445 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37b2d3d2-5651-4f0f-ae09-aa0cdb06c359" path="/var/lib/kubelet/pods/37b2d3d2-5651-4f0f-ae09-aa0cdb06c359/volumes" Feb 28 
04:50:11 crc kubenswrapper[5014]: I0228 04:50:11.351669 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zqbz9" podUID="1f2707ec-866d-45fd-8983-c70ac8018def" containerName="registry-server" containerID="cri-o://bf825b2de63e7eda0e1f30484d43a9720c138e77f42eb8073425884195a7be52" gracePeriod=2 Feb 28 04:50:11 crc kubenswrapper[5014]: I0228 04:50:11.759289 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zqbz9" Feb 28 04:50:11 crc kubenswrapper[5014]: I0228 04:50:11.952872 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82pc6\" (UniqueName: \"kubernetes.io/projected/1f2707ec-866d-45fd-8983-c70ac8018def-kube-api-access-82pc6\") pod \"1f2707ec-866d-45fd-8983-c70ac8018def\" (UID: \"1f2707ec-866d-45fd-8983-c70ac8018def\") " Feb 28 04:50:11 crc kubenswrapper[5014]: I0228 04:50:11.952976 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f2707ec-866d-45fd-8983-c70ac8018def-catalog-content\") pod \"1f2707ec-866d-45fd-8983-c70ac8018def\" (UID: \"1f2707ec-866d-45fd-8983-c70ac8018def\") " Feb 28 04:50:11 crc kubenswrapper[5014]: I0228 04:50:11.953044 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f2707ec-866d-45fd-8983-c70ac8018def-utilities\") pod \"1f2707ec-866d-45fd-8983-c70ac8018def\" (UID: \"1f2707ec-866d-45fd-8983-c70ac8018def\") " Feb 28 04:50:11 crc kubenswrapper[5014]: I0228 04:50:11.953708 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f2707ec-866d-45fd-8983-c70ac8018def-utilities" (OuterVolumeSpecName: "utilities") pod "1f2707ec-866d-45fd-8983-c70ac8018def" (UID: "1f2707ec-866d-45fd-8983-c70ac8018def"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:50:11 crc kubenswrapper[5014]: I0228 04:50:11.961996 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f2707ec-866d-45fd-8983-c70ac8018def-kube-api-access-82pc6" (OuterVolumeSpecName: "kube-api-access-82pc6") pod "1f2707ec-866d-45fd-8983-c70ac8018def" (UID: "1f2707ec-866d-45fd-8983-c70ac8018def"). InnerVolumeSpecName "kube-api-access-82pc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:50:11 crc kubenswrapper[5014]: I0228 04:50:11.976794 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f2707ec-866d-45fd-8983-c70ac8018def-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f2707ec-866d-45fd-8983-c70ac8018def" (UID: "1f2707ec-866d-45fd-8983-c70ac8018def"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:50:12 crc kubenswrapper[5014]: I0228 04:50:12.054588 5014 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f2707ec-866d-45fd-8983-c70ac8018def-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 04:50:12 crc kubenswrapper[5014]: I0228 04:50:12.054618 5014 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f2707ec-866d-45fd-8983-c70ac8018def-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 04:50:12 crc kubenswrapper[5014]: I0228 04:50:12.054628 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82pc6\" (UniqueName: \"kubernetes.io/projected/1f2707ec-866d-45fd-8983-c70ac8018def-kube-api-access-82pc6\") on node \"crc\" DevicePath \"\"" Feb 28 04:50:12 crc kubenswrapper[5014]: I0228 04:50:12.360675 5014 generic.go:334] "Generic (PLEG): container finished" podID="1f2707ec-866d-45fd-8983-c70ac8018def" 
containerID="bf825b2de63e7eda0e1f30484d43a9720c138e77f42eb8073425884195a7be52" exitCode=0 Feb 28 04:50:12 crc kubenswrapper[5014]: I0228 04:50:12.360725 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zqbz9" event={"ID":"1f2707ec-866d-45fd-8983-c70ac8018def","Type":"ContainerDied","Data":"bf825b2de63e7eda0e1f30484d43a9720c138e77f42eb8073425884195a7be52"} Feb 28 04:50:12 crc kubenswrapper[5014]: I0228 04:50:12.360756 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zqbz9" event={"ID":"1f2707ec-866d-45fd-8983-c70ac8018def","Type":"ContainerDied","Data":"ba93f01ade138274bc8fb611d8c88ebde0ccf1d50f1ac39cc96c723e66f9bac7"} Feb 28 04:50:12 crc kubenswrapper[5014]: I0228 04:50:12.360789 5014 scope.go:117] "RemoveContainer" containerID="bf825b2de63e7eda0e1f30484d43a9720c138e77f42eb8073425884195a7be52" Feb 28 04:50:12 crc kubenswrapper[5014]: I0228 04:50:12.360791 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zqbz9" Feb 28 04:50:12 crc kubenswrapper[5014]: I0228 04:50:12.386251 5014 scope.go:117] "RemoveContainer" containerID="aa747a0f3da7972e5238d9c6d072692243b59dec6a903fcfc9c03248e8a6f7cd" Feb 28 04:50:12 crc kubenswrapper[5014]: I0228 04:50:12.398189 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zqbz9"] Feb 28 04:50:12 crc kubenswrapper[5014]: I0228 04:50:12.404561 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zqbz9"] Feb 28 04:50:12 crc kubenswrapper[5014]: I0228 04:50:12.436039 5014 scope.go:117] "RemoveContainer" containerID="ee924a9b4b56846f0c28dca62b684183277dafd5e1b4c9437995c7e435bdb0de" Feb 28 04:50:12 crc kubenswrapper[5014]: I0228 04:50:12.459992 5014 scope.go:117] "RemoveContainer" containerID="bf825b2de63e7eda0e1f30484d43a9720c138e77f42eb8073425884195a7be52" Feb 28 04:50:12 crc kubenswrapper[5014]: E0228 04:50:12.460564 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf825b2de63e7eda0e1f30484d43a9720c138e77f42eb8073425884195a7be52\": container with ID starting with bf825b2de63e7eda0e1f30484d43a9720c138e77f42eb8073425884195a7be52 not found: ID does not exist" containerID="bf825b2de63e7eda0e1f30484d43a9720c138e77f42eb8073425884195a7be52" Feb 28 04:50:12 crc kubenswrapper[5014]: I0228 04:50:12.460605 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf825b2de63e7eda0e1f30484d43a9720c138e77f42eb8073425884195a7be52"} err="failed to get container status \"bf825b2de63e7eda0e1f30484d43a9720c138e77f42eb8073425884195a7be52\": rpc error: code = NotFound desc = could not find container \"bf825b2de63e7eda0e1f30484d43a9720c138e77f42eb8073425884195a7be52\": container with ID starting with bf825b2de63e7eda0e1f30484d43a9720c138e77f42eb8073425884195a7be52 not found: 
ID does not exist" Feb 28 04:50:12 crc kubenswrapper[5014]: I0228 04:50:12.460651 5014 scope.go:117] "RemoveContainer" containerID="aa747a0f3da7972e5238d9c6d072692243b59dec6a903fcfc9c03248e8a6f7cd" Feb 28 04:50:12 crc kubenswrapper[5014]: E0228 04:50:12.461026 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa747a0f3da7972e5238d9c6d072692243b59dec6a903fcfc9c03248e8a6f7cd\": container with ID starting with aa747a0f3da7972e5238d9c6d072692243b59dec6a903fcfc9c03248e8a6f7cd not found: ID does not exist" containerID="aa747a0f3da7972e5238d9c6d072692243b59dec6a903fcfc9c03248e8a6f7cd" Feb 28 04:50:12 crc kubenswrapper[5014]: I0228 04:50:12.461070 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa747a0f3da7972e5238d9c6d072692243b59dec6a903fcfc9c03248e8a6f7cd"} err="failed to get container status \"aa747a0f3da7972e5238d9c6d072692243b59dec6a903fcfc9c03248e8a6f7cd\": rpc error: code = NotFound desc = could not find container \"aa747a0f3da7972e5238d9c6d072692243b59dec6a903fcfc9c03248e8a6f7cd\": container with ID starting with aa747a0f3da7972e5238d9c6d072692243b59dec6a903fcfc9c03248e8a6f7cd not found: ID does not exist" Feb 28 04:50:12 crc kubenswrapper[5014]: I0228 04:50:12.461095 5014 scope.go:117] "RemoveContainer" containerID="ee924a9b4b56846f0c28dca62b684183277dafd5e1b4c9437995c7e435bdb0de" Feb 28 04:50:12 crc kubenswrapper[5014]: E0228 04:50:12.461452 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee924a9b4b56846f0c28dca62b684183277dafd5e1b4c9437995c7e435bdb0de\": container with ID starting with ee924a9b4b56846f0c28dca62b684183277dafd5e1b4c9437995c7e435bdb0de not found: ID does not exist" containerID="ee924a9b4b56846f0c28dca62b684183277dafd5e1b4c9437995c7e435bdb0de" Feb 28 04:50:12 crc kubenswrapper[5014]: I0228 04:50:12.461499 5014 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee924a9b4b56846f0c28dca62b684183277dafd5e1b4c9437995c7e435bdb0de"} err="failed to get container status \"ee924a9b4b56846f0c28dca62b684183277dafd5e1b4c9437995c7e435bdb0de\": rpc error: code = NotFound desc = could not find container \"ee924a9b4b56846f0c28dca62b684183277dafd5e1b4c9437995c7e435bdb0de\": container with ID starting with ee924a9b4b56846f0c28dca62b684183277dafd5e1b4c9437995c7e435bdb0de not found: ID does not exist" Feb 28 04:50:14 crc kubenswrapper[5014]: I0228 04:50:14.185449 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f2707ec-866d-45fd-8983-c70ac8018def" path="/var/lib/kubelet/pods/1f2707ec-866d-45fd-8983-c70ac8018def/volumes" Feb 28 04:50:15 crc kubenswrapper[5014]: I0228 04:50:15.706415 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:50:15 crc kubenswrapper[5014]: I0228 04:50:15.706705 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.624324 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6db6876945-p2g4k"] Feb 28 04:50:21 crc kubenswrapper[5014]: E0228 04:50:21.625173 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f2707ec-866d-45fd-8983-c70ac8018def" containerName="extract-content" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.625190 5014 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="1f2707ec-866d-45fd-8983-c70ac8018def" containerName="extract-content" Feb 28 04:50:21 crc kubenswrapper[5014]: E0228 04:50:21.625209 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37b2d3d2-5651-4f0f-ae09-aa0cdb06c359" containerName="registry-server" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.625217 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="37b2d3d2-5651-4f0f-ae09-aa0cdb06c359" containerName="registry-server" Feb 28 04:50:21 crc kubenswrapper[5014]: E0228 04:50:21.625230 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37b2d3d2-5651-4f0f-ae09-aa0cdb06c359" containerName="extract-content" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.625239 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="37b2d3d2-5651-4f0f-ae09-aa0cdb06c359" containerName="extract-content" Feb 28 04:50:21 crc kubenswrapper[5014]: E0228 04:50:21.625253 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="872970d0-18d1-4825-add0-22771504e688" containerName="extract-utilities" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.625259 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="872970d0-18d1-4825-add0-22771504e688" containerName="extract-utilities" Feb 28 04:50:21 crc kubenswrapper[5014]: E0228 04:50:21.625271 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="872970d0-18d1-4825-add0-22771504e688" containerName="registry-server" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.625279 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="872970d0-18d1-4825-add0-22771504e688" containerName="registry-server" Feb 28 04:50:21 crc kubenswrapper[5014]: E0228 04:50:21.625292 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f2707ec-866d-45fd-8983-c70ac8018def" containerName="extract-utilities" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.625300 5014 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="1f2707ec-866d-45fd-8983-c70ac8018def" containerName="extract-utilities" Feb 28 04:50:21 crc kubenswrapper[5014]: E0228 04:50:21.625313 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f2707ec-866d-45fd-8983-c70ac8018def" containerName="registry-server" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.625321 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f2707ec-866d-45fd-8983-c70ac8018def" containerName="registry-server" Feb 28 04:50:21 crc kubenswrapper[5014]: E0228 04:50:21.625335 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="872970d0-18d1-4825-add0-22771504e688" containerName="extract-content" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.625343 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="872970d0-18d1-4825-add0-22771504e688" containerName="extract-content" Feb 28 04:50:21 crc kubenswrapper[5014]: E0228 04:50:21.625359 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d816e108-724b-47c0-a6a2-6499c9c56252" containerName="oc" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.625367 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="d816e108-724b-47c0-a6a2-6499c9c56252" containerName="oc" Feb 28 04:50:21 crc kubenswrapper[5014]: E0228 04:50:21.625382 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37b2d3d2-5651-4f0f-ae09-aa0cdb06c359" containerName="extract-utilities" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.625390 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="37b2d3d2-5651-4f0f-ae09-aa0cdb06c359" containerName="extract-utilities" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.625528 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="d816e108-724b-47c0-a6a2-6499c9c56252" containerName="oc" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.625545 5014 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="1f2707ec-866d-45fd-8983-c70ac8018def" containerName="registry-server" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.625555 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="872970d0-18d1-4825-add0-22771504e688" containerName="registry-server" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.625565 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="37b2d3d2-5651-4f0f-ae09-aa0cdb06c359" containerName="registry-server" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.626144 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-p2g4k" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.632130 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-n9r5r"] Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.633050 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-vblrs" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.633209 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-n9r5r" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.634525 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-49snp" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.641676 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6db6876945-p2g4k"] Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.647426 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-n9r5r"] Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.654632 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-5d87c9d997-587tn"] Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.655603 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-587tn" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.657998 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-b5r6v" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.659247 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-64db6967f8-5t42k"] Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.660386 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-5t42k" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.666549 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-5d87c9d997-587tn"] Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.671766 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-rmp9z" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.672550 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-cf99c678f-2srvx"] Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.673354 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-2srvx" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.674577 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-2w45h" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.698500 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7fzl\" (UniqueName: \"kubernetes.io/projected/7dfedb71-1284-4e5c-826d-efb134b34cdb-kube-api-access-j7fzl\") pod \"barbican-operator-controller-manager-6db6876945-p2g4k\" (UID: \"7dfedb71-1284-4e5c-826d-efb134b34cdb\") " pod="openstack-operators/barbican-operator-controller-manager-6db6876945-p2g4k" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.698557 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cmp5\" (UniqueName: \"kubernetes.io/projected/52707aa4-b40d-4046-a721-e3b31a1f9648-kube-api-access-5cmp5\") pod \"designate-operator-controller-manager-5d87c9d997-587tn\" (UID: \"52707aa4-b40d-4046-a721-e3b31a1f9648\") 
" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-587tn" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.698586 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfd9j\" (UniqueName: \"kubernetes.io/projected/385767a3-7908-4f17-9f63-ea25c784c715-kube-api-access-pfd9j\") pod \"cinder-operator-controller-manager-55d77d7b5c-n9r5r\" (UID: \"385767a3-7908-4f17-9f63-ea25c784c715\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-n9r5r" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.698630 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njc4j\" (UniqueName: \"kubernetes.io/projected/f734a97b-b94d-4132-a426-15111b3fc207-kube-api-access-njc4j\") pod \"glance-operator-controller-manager-64db6967f8-5t42k\" (UID: \"f734a97b-b94d-4132-a426-15111b3fc207\") " pod="openstack-operators/glance-operator-controller-manager-64db6967f8-5t42k" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.703057 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-64db6967f8-5t42k"] Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.736345 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-cf99c678f-2srvx"] Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.736399 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-ppf6c"] Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.737276 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-ppf6c" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.738926 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-46xzw" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.752057 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-ppf6c"] Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.754287 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-786bd545f6-8hp88"] Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.755018 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-786bd545f6-8hp88" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.758942 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.759144 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-z4rh5" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.786386 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-545456dc4-7bmg5"] Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.787197 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-7bmg5" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.797474 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-6bd5c" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.800617 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fztmb\" (UniqueName: \"kubernetes.io/projected/9fe3aab0-3f3b-4fb3-a5da-2206ba55e813-kube-api-access-fztmb\") pod \"heat-operator-controller-manager-cf99c678f-2srvx\" (UID: \"9fe3aab0-3f3b-4fb3-a5da-2206ba55e813\") " pod="openstack-operators/heat-operator-controller-manager-cf99c678f-2srvx" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.800661 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9jvz\" (UniqueName: \"kubernetes.io/projected/0535be64-bda6-4b55-9eb1-fe5a86d3cae8-kube-api-access-r9jvz\") pod \"infra-operator-controller-manager-786bd545f6-8hp88\" (UID: \"0535be64-bda6-4b55-9eb1-fe5a86d3cae8\") " pod="openstack-operators/infra-operator-controller-manager-786bd545f6-8hp88" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.800701 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7fzl\" (UniqueName: \"kubernetes.io/projected/7dfedb71-1284-4e5c-826d-efb134b34cdb-kube-api-access-j7fzl\") pod \"barbican-operator-controller-manager-6db6876945-p2g4k\" (UID: \"7dfedb71-1284-4e5c-826d-efb134b34cdb\") " pod="openstack-operators/barbican-operator-controller-manager-6db6876945-p2g4k" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.800721 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0535be64-bda6-4b55-9eb1-fe5a86d3cae8-cert\") pod 
\"infra-operator-controller-manager-786bd545f6-8hp88\" (UID: \"0535be64-bda6-4b55-9eb1-fe5a86d3cae8\") " pod="openstack-operators/infra-operator-controller-manager-786bd545f6-8hp88" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.800754 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5cmp5\" (UniqueName: \"kubernetes.io/projected/52707aa4-b40d-4046-a721-e3b31a1f9648-kube-api-access-5cmp5\") pod \"designate-operator-controller-manager-5d87c9d997-587tn\" (UID: \"52707aa4-b40d-4046-a721-e3b31a1f9648\") " pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-587tn" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.800779 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfd9j\" (UniqueName: \"kubernetes.io/projected/385767a3-7908-4f17-9f63-ea25c784c715-kube-api-access-pfd9j\") pod \"cinder-operator-controller-manager-55d77d7b5c-n9r5r\" (UID: \"385767a3-7908-4f17-9f63-ea25c784c715\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-n9r5r" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.800819 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pchw\" (UniqueName: \"kubernetes.io/projected/5b9d913b-e0e8-42f5-8d98-60fd3c219ff8-kube-api-access-4pchw\") pod \"horizon-operator-controller-manager-78bc7f9bd9-ppf6c\" (UID: \"5b9d913b-e0e8-42f5-8d98-60fd3c219ff8\") " pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-ppf6c" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.800841 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njc4j\" (UniqueName: \"kubernetes.io/projected/f734a97b-b94d-4132-a426-15111b3fc207-kube-api-access-njc4j\") pod \"glance-operator-controller-manager-64db6967f8-5t42k\" (UID: \"f734a97b-b94d-4132-a426-15111b3fc207\") " 
pod="openstack-operators/glance-operator-controller-manager-64db6967f8-5t42k" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.805041 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7c789f89c6-cfb47"] Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.805826 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-cfb47" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.809323 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-j8xsk" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.819236 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-786bd545f6-8hp88"] Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.830557 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njc4j\" (UniqueName: \"kubernetes.io/projected/f734a97b-b94d-4132-a426-15111b3fc207-kube-api-access-njc4j\") pod \"glance-operator-controller-manager-64db6967f8-5t42k\" (UID: \"f734a97b-b94d-4132-a426-15111b3fc207\") " pod="openstack-operators/glance-operator-controller-manager-64db6967f8-5t42k" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.846513 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5cmp5\" (UniqueName: \"kubernetes.io/projected/52707aa4-b40d-4046-a721-e3b31a1f9648-kube-api-access-5cmp5\") pod \"designate-operator-controller-manager-5d87c9d997-587tn\" (UID: \"52707aa4-b40d-4046-a721-e3b31a1f9648\") " pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-587tn" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.858614 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfd9j\" (UniqueName: 
\"kubernetes.io/projected/385767a3-7908-4f17-9f63-ea25c784c715-kube-api-access-pfd9j\") pod \"cinder-operator-controller-manager-55d77d7b5c-n9r5r\" (UID: \"385767a3-7908-4f17-9f63-ea25c784c715\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-n9r5r" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.862690 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7fzl\" (UniqueName: \"kubernetes.io/projected/7dfedb71-1284-4e5c-826d-efb134b34cdb-kube-api-access-j7fzl\") pod \"barbican-operator-controller-manager-6db6876945-p2g4k\" (UID: \"7dfedb71-1284-4e5c-826d-efb134b34cdb\") " pod="openstack-operators/barbican-operator-controller-manager-6db6876945-p2g4k" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.877868 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7c789f89c6-cfb47"] Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.886428 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-545456dc4-7bmg5"] Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.914754 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9jvz\" (UniqueName: \"kubernetes.io/projected/0535be64-bda6-4b55-9eb1-fe5a86d3cae8-kube-api-access-r9jvz\") pod \"infra-operator-controller-manager-786bd545f6-8hp88\" (UID: \"0535be64-bda6-4b55-9eb1-fe5a86d3cae8\") " pod="openstack-operators/infra-operator-controller-manager-786bd545f6-8hp88" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.914820 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0535be64-bda6-4b55-9eb1-fe5a86d3cae8-cert\") pod \"infra-operator-controller-manager-786bd545f6-8hp88\" (UID: \"0535be64-bda6-4b55-9eb1-fe5a86d3cae8\") " 
pod="openstack-operators/infra-operator-controller-manager-786bd545f6-8hp88" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.914850 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5589\" (UniqueName: \"kubernetes.io/projected/dd26043c-48bc-4202-8266-d2590b6530e3-kube-api-access-h5589\") pod \"ironic-operator-controller-manager-545456dc4-7bmg5\" (UID: \"dd26043c-48bc-4202-8266-d2590b6530e3\") " pod="openstack-operators/ironic-operator-controller-manager-545456dc4-7bmg5" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.915104 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pchw\" (UniqueName: \"kubernetes.io/projected/5b9d913b-e0e8-42f5-8d98-60fd3c219ff8-kube-api-access-4pchw\") pod \"horizon-operator-controller-manager-78bc7f9bd9-ppf6c\" (UID: \"5b9d913b-e0e8-42f5-8d98-60fd3c219ff8\") " pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-ppf6c" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.915149 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws84l\" (UniqueName: \"kubernetes.io/projected/42fc68c6-e92f-4449-9398-518f904c58fb-kube-api-access-ws84l\") pod \"keystone-operator-controller-manager-7c789f89c6-cfb47\" (UID: \"42fc68c6-e92f-4449-9398-518f904c58fb\") " pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-cfb47" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.915186 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fztmb\" (UniqueName: \"kubernetes.io/projected/9fe3aab0-3f3b-4fb3-a5da-2206ba55e813-kube-api-access-fztmb\") pod \"heat-operator-controller-manager-cf99c678f-2srvx\" (UID: \"9fe3aab0-3f3b-4fb3-a5da-2206ba55e813\") " pod="openstack-operators/heat-operator-controller-manager-cf99c678f-2srvx" Feb 28 04:50:21 crc kubenswrapper[5014]: E0228 
04:50:21.915662 5014 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 28 04:50:21 crc kubenswrapper[5014]: E0228 04:50:21.915714 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0535be64-bda6-4b55-9eb1-fe5a86d3cae8-cert podName:0535be64-bda6-4b55-9eb1-fe5a86d3cae8 nodeName:}" failed. No retries permitted until 2026-02-28 04:50:22.41569456 +0000 UTC m=+1011.085820470 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0535be64-bda6-4b55-9eb1-fe5a86d3cae8-cert") pod "infra-operator-controller-manager-786bd545f6-8hp88" (UID: "0535be64-bda6-4b55-9eb1-fe5a86d3cae8") : secret "infra-operator-webhook-server-cert" not found Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.950424 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-gm8rn"] Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.951685 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-gm8rn" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.954741 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-p2g4k" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.958226 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pchw\" (UniqueName: \"kubernetes.io/projected/5b9d913b-e0e8-42f5-8d98-60fd3c219ff8-kube-api-access-4pchw\") pod \"horizon-operator-controller-manager-78bc7f9bd9-ppf6c\" (UID: \"5b9d913b-e0e8-42f5-8d98-60fd3c219ff8\") " pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-ppf6c" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.958940 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-gm8rn"] Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.963926 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-s4j6f"] Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.964765 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-s4j6f" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.969513 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-s4j6f"] Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.970241 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-n9r5r" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.972957 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54688575f-xpd29"] Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.973968 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54688575f-xpd29" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.990909 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-74b6b5dc96-rcb7d"] Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.991890 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54688575f-xpd29"] Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.991978 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-rcb7d" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.992534 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-tdlz2" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.992668 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-wplbw" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.992772 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-cqbnn" Feb 28 04:50:21 crc kubenswrapper[5014]: I0228 04:50:21.999113 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-587tn" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.011195 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-sfpfd" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.016728 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k2pd\" (UniqueName: \"kubernetes.io/projected/5f8b5a91-a57a-4679-a625-007592105038-kube-api-access-6k2pd\") pod \"mariadb-operator-controller-manager-7b6bfb6475-s4j6f\" (UID: \"5f8b5a91-a57a-4679-a625-007592105038\") " pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-s4j6f" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.016795 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws84l\" (UniqueName: \"kubernetes.io/projected/42fc68c6-e92f-4449-9398-518f904c58fb-kube-api-access-ws84l\") pod \"keystone-operator-controller-manager-7c789f89c6-cfb47\" (UID: \"42fc68c6-e92f-4449-9398-518f904c58fb\") " pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-cfb47" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.016861 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht7d4\" (UniqueName: \"kubernetes.io/projected/5189b3c2-1b93-432b-b1a3-dc579ef2abb6-kube-api-access-ht7d4\") pod \"manila-operator-controller-manager-67d996989d-gm8rn\" (UID: \"5189b3c2-1b93-432b-b1a3-dc579ef2abb6\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-gm8rn" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.016929 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5589\" (UniqueName: 
\"kubernetes.io/projected/dd26043c-48bc-4202-8266-d2590b6530e3-kube-api-access-h5589\") pod \"ironic-operator-controller-manager-545456dc4-7bmg5\" (UID: \"dd26043c-48bc-4202-8266-d2590b6530e3\") " pod="openstack-operators/ironic-operator-controller-manager-545456dc4-7bmg5" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.017009 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9wns\" (UniqueName: \"kubernetes.io/projected/f5555801-1739-45d3-946f-3b731b87c593-kube-api-access-z9wns\") pod \"neutron-operator-controller-manager-54688575f-xpd29\" (UID: \"f5555801-1739-45d3-946f-3b731b87c593\") " pod="openstack-operators/neutron-operator-controller-manager-54688575f-xpd29" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.017519 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-74b6b5dc96-rcb7d"] Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.026614 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9jvz\" (UniqueName: \"kubernetes.io/projected/0535be64-bda6-4b55-9eb1-fe5a86d3cae8-kube-api-access-r9jvz\") pod \"infra-operator-controller-manager-786bd545f6-8hp88\" (UID: \"0535be64-bda6-4b55-9eb1-fe5a86d3cae8\") " pod="openstack-operators/infra-operator-controller-manager-786bd545f6-8hp88" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.027483 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fztmb\" (UniqueName: \"kubernetes.io/projected/9fe3aab0-3f3b-4fb3-a5da-2206ba55e813-kube-api-access-fztmb\") pod \"heat-operator-controller-manager-cf99c678f-2srvx\" (UID: \"9fe3aab0-3f3b-4fb3-a5da-2206ba55e813\") " pod="openstack-operators/heat-operator-controller-manager-cf99c678f-2srvx" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.049772 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-h5589\" (UniqueName: \"kubernetes.io/projected/dd26043c-48bc-4202-8266-d2590b6530e3-kube-api-access-h5589\") pod \"ironic-operator-controller-manager-545456dc4-7bmg5\" (UID: \"dd26043c-48bc-4202-8266-d2590b6530e3\") " pod="openstack-operators/ironic-operator-controller-manager-545456dc4-7bmg5" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.050575 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-5t42k" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.062480 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws84l\" (UniqueName: \"kubernetes.io/projected/42fc68c6-e92f-4449-9398-518f904c58fb-kube-api-access-ws84l\") pod \"keystone-operator-controller-manager-7c789f89c6-cfb47\" (UID: \"42fc68c6-e92f-4449-9398-518f904c58fb\") " pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-cfb47" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.086106 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-2srvx" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.104848 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-cfb47" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.106550 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-ppf6c" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.113818 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-75684d597f-pg7jw"] Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.114900 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-pg7jw" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.119176 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-xlhbf" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.120203 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht7d4\" (UniqueName: \"kubernetes.io/projected/5189b3c2-1b93-432b-b1a3-dc579ef2abb6-kube-api-access-ht7d4\") pod \"manila-operator-controller-manager-67d996989d-gm8rn\" (UID: \"5189b3c2-1b93-432b-b1a3-dc579ef2abb6\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-gm8rn" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.120291 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnz4f\" (UniqueName: \"kubernetes.io/projected/d56f3210-6165-4bd1-b2e0-d8eb94b370a9-kube-api-access-fnz4f\") pod \"nova-operator-controller-manager-74b6b5dc96-rcb7d\" (UID: \"d56f3210-6165-4bd1-b2e0-d8eb94b370a9\") " pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-rcb7d" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.120355 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9wns\" (UniqueName: \"kubernetes.io/projected/f5555801-1739-45d3-946f-3b731b87c593-kube-api-access-z9wns\") pod \"neutron-operator-controller-manager-54688575f-xpd29\" (UID: \"f5555801-1739-45d3-946f-3b731b87c593\") " pod="openstack-operators/neutron-operator-controller-manager-54688575f-xpd29" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.120395 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k2pd\" (UniqueName: \"kubernetes.io/projected/5f8b5a91-a57a-4679-a625-007592105038-kube-api-access-6k2pd\") pod 
\"mariadb-operator-controller-manager-7b6bfb6475-s4j6f\" (UID: \"5f8b5a91-a57a-4679-a625-007592105038\") " pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-s4j6f" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.127029 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-75684d597f-pg7jw"] Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.141881 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-pl8nn"] Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.142717 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-pl8nn" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.146089 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-m964z" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.148032 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-7bmg5" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.151538 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9wns\" (UniqueName: \"kubernetes.io/projected/f5555801-1739-45d3-946f-3b731b87c593-kube-api-access-z9wns\") pod \"neutron-operator-controller-manager-54688575f-xpd29\" (UID: \"f5555801-1739-45d3-946f-3b731b87c593\") " pod="openstack-operators/neutron-operator-controller-manager-54688575f-xpd29" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.151649 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-648564c9fc-gjdqb"] Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.152616 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-gjdqb" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.163189 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht7d4\" (UniqueName: \"kubernetes.io/projected/5189b3c2-1b93-432b-b1a3-dc579ef2abb6-kube-api-access-ht7d4\") pod \"manila-operator-controller-manager-67d996989d-gm8rn\" (UID: \"5189b3c2-1b93-432b-b1a3-dc579ef2abb6\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-gm8rn" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.164322 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-g6p7t" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.164685 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-gm8rn" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.168501 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k2pd\" (UniqueName: \"kubernetes.io/projected/5f8b5a91-a57a-4679-a625-007592105038-kube-api-access-6k2pd\") pod \"mariadb-operator-controller-manager-7b6bfb6475-s4j6f\" (UID: \"5f8b5a91-a57a-4679-a625-007592105038\") " pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-s4j6f" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.221231 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-pl8nn"] Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.221279 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj"] Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.221993 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/placement-operator-controller-manager-648564c9fc-gjdqb"] Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.222015 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-9b9ff9f4d-snccq"] Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.222517 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-snccq" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.223155 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbtk2\" (UniqueName: \"kubernetes.io/projected/67e3c7dc-a78f-4039-b326-93795dd322ca-kube-api-access-jbtk2\") pod \"ovn-operator-controller-manager-75684d597f-pg7jw\" (UID: \"67e3c7dc-a78f-4039-b326-93795dd322ca\") " pod="openstack-operators/ovn-operator-controller-manager-75684d597f-pg7jw" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.223207 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbjwz\" (UniqueName: \"kubernetes.io/projected/07f212b7-6aea-4a43-95fa-4637b6dc1d87-kube-api-access-wbjwz\") pod \"placement-operator-controller-manager-648564c9fc-gjdqb\" (UID: \"07f212b7-6aea-4a43-95fa-4637b6dc1d87\") " pod="openstack-operators/placement-operator-controller-manager-648564c9fc-gjdqb" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.223249 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48mvp\" (UniqueName: \"kubernetes.io/projected/895709de-d62e-4101-8294-d73238790d9c-kube-api-access-48mvp\") pod \"octavia-operator-controller-manager-5d86c7ddb7-pl8nn\" (UID: \"895709de-d62e-4101-8294-d73238790d9c\") " pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-pl8nn" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.223317 5014 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnz4f\" (UniqueName: \"kubernetes.io/projected/d56f3210-6165-4bd1-b2e0-d8eb94b370a9-kube-api-access-fnz4f\") pod \"nova-operator-controller-manager-74b6b5dc96-rcb7d\" (UID: \"d56f3210-6165-4bd1-b2e0-d8eb94b370a9\") " pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-rcb7d" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.226680 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-s4j6f" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.233101 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.236153 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.236252 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-lrpsj" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.244264 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5fdb694969-82d7x"] Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.245311 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-82d7x" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.248755 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-kpgjv" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.249092 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-768cr" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.264040 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnz4f\" (UniqueName: \"kubernetes.io/projected/d56f3210-6165-4bd1-b2e0-d8eb94b370a9-kube-api-access-fnz4f\") pod \"nova-operator-controller-manager-74b6b5dc96-rcb7d\" (UID: \"d56f3210-6165-4bd1-b2e0-d8eb94b370a9\") " pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-rcb7d" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.273867 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9b9ff9f4d-snccq"] Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.276759 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54688575f-xpd29" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.300130 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj"] Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.300502 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5fdb694969-82d7x"] Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.324534 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7g8l\" (UniqueName: \"kubernetes.io/projected/7c84fa60-3777-4544-84ce-abc199e9df18-kube-api-access-k7g8l\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj\" (UID: \"7c84fa60-3777-4544-84ce-abc199e9df18\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.324775 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z77b\" (UniqueName: \"kubernetes.io/projected/1089c9f7-0d91-4639-9890-c41acc881797-kube-api-access-4z77b\") pod \"telemetry-operator-controller-manager-5fdb694969-82d7x\" (UID: \"1089c9f7-0d91-4639-9890-c41acc881797\") " pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-82d7x" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.324905 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbtk2\" (UniqueName: \"kubernetes.io/projected/67e3c7dc-a78f-4039-b326-93795dd322ca-kube-api-access-jbtk2\") pod \"ovn-operator-controller-manager-75684d597f-pg7jw\" (UID: \"67e3c7dc-a78f-4039-b326-93795dd322ca\") " pod="openstack-operators/ovn-operator-controller-manager-75684d597f-pg7jw" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 
04:50:22.324992 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c84fa60-3777-4544-84ce-abc199e9df18-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj\" (UID: \"7c84fa60-3777-4544-84ce-abc199e9df18\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.325093 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbjwz\" (UniqueName: \"kubernetes.io/projected/07f212b7-6aea-4a43-95fa-4637b6dc1d87-kube-api-access-wbjwz\") pod \"placement-operator-controller-manager-648564c9fc-gjdqb\" (UID: \"07f212b7-6aea-4a43-95fa-4637b6dc1d87\") " pod="openstack-operators/placement-operator-controller-manager-648564c9fc-gjdqb" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.325176 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48mvp\" (UniqueName: \"kubernetes.io/projected/895709de-d62e-4101-8294-d73238790d9c-kube-api-access-48mvp\") pod \"octavia-operator-controller-manager-5d86c7ddb7-pl8nn\" (UID: \"895709de-d62e-4101-8294-d73238790d9c\") " pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-pl8nn" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.325263 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8r7n\" (UniqueName: \"kubernetes.io/projected/f254469c-2cb3-4f38-8c52-960aa17d27fe-kube-api-access-j8r7n\") pod \"swift-operator-controller-manager-9b9ff9f4d-snccq\" (UID: \"f254469c-2cb3-4f38-8c52-960aa17d27fe\") " pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-snccq" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.332794 5014 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/test-operator-controller-manager-55b5ff4dbb-clg6t"] Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.333708 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-clg6t" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.341393 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-55b5ff4dbb-clg6t"] Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.361445 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-975zn"] Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.367776 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-975zn"] Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.367924 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-975zn" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.375516 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-hms56" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.376078 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-7rz69" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.398725 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbjwz\" (UniqueName: \"kubernetes.io/projected/07f212b7-6aea-4a43-95fa-4637b6dc1d87-kube-api-access-wbjwz\") pod \"placement-operator-controller-manager-648564c9fc-gjdqb\" (UID: \"07f212b7-6aea-4a43-95fa-4637b6dc1d87\") " pod="openstack-operators/placement-operator-controller-manager-648564c9fc-gjdqb" Feb 28 04:50:22 crc 
kubenswrapper[5014]: I0228 04:50:22.407665 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbtk2\" (UniqueName: \"kubernetes.io/projected/67e3c7dc-a78f-4039-b326-93795dd322ca-kube-api-access-jbtk2\") pod \"ovn-operator-controller-manager-75684d597f-pg7jw\" (UID: \"67e3c7dc-a78f-4039-b326-93795dd322ca\") " pod="openstack-operators/ovn-operator-controller-manager-75684d597f-pg7jw" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.436788 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48mvp\" (UniqueName: \"kubernetes.io/projected/895709de-d62e-4101-8294-d73238790d9c-kube-api-access-48mvp\") pod \"octavia-operator-controller-manager-5d86c7ddb7-pl8nn\" (UID: \"895709de-d62e-4101-8294-d73238790d9c\") " pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-pl8nn" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.443182 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d62g\" (UniqueName: \"kubernetes.io/projected/d9ccc996-b3d9-44f1-8a6e-c58517885a7c-kube-api-access-4d62g\") pod \"test-operator-controller-manager-55b5ff4dbb-clg6t\" (UID: \"d9ccc996-b3d9-44f1-8a6e-c58517885a7c\") " pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-clg6t" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.443263 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c84fa60-3777-4544-84ce-abc199e9df18-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj\" (UID: \"7c84fa60-3777-4544-84ce-abc199e9df18\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.443303 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdb4r\" (UniqueName: 
\"kubernetes.io/projected/f229b3d6-46dd-42ab-bb96-c207b02b35d0-kube-api-access-vdb4r\") pod \"watcher-operator-controller-manager-bccc79885-975zn\" (UID: \"f229b3d6-46dd-42ab-bb96-c207b02b35d0\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-975zn" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.443357 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8r7n\" (UniqueName: \"kubernetes.io/projected/f254469c-2cb3-4f38-8c52-960aa17d27fe-kube-api-access-j8r7n\") pod \"swift-operator-controller-manager-9b9ff9f4d-snccq\" (UID: \"f254469c-2cb3-4f38-8c52-960aa17d27fe\") " pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-snccq" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.443423 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0535be64-bda6-4b55-9eb1-fe5a86d3cae8-cert\") pod \"infra-operator-controller-manager-786bd545f6-8hp88\" (UID: \"0535be64-bda6-4b55-9eb1-fe5a86d3cae8\") " pod="openstack-operators/infra-operator-controller-manager-786bd545f6-8hp88" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.443497 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7g8l\" (UniqueName: \"kubernetes.io/projected/7c84fa60-3777-4544-84ce-abc199e9df18-kube-api-access-k7g8l\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj\" (UID: \"7c84fa60-3777-4544-84ce-abc199e9df18\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.443536 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z77b\" (UniqueName: \"kubernetes.io/projected/1089c9f7-0d91-4639-9890-c41acc881797-kube-api-access-4z77b\") pod \"telemetry-operator-controller-manager-5fdb694969-82d7x\" (UID: 
\"1089c9f7-0d91-4639-9890-c41acc881797\") " pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-82d7x" Feb 28 04:50:22 crc kubenswrapper[5014]: E0228 04:50:22.444040 5014 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 04:50:22 crc kubenswrapper[5014]: E0228 04:50:22.444095 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c84fa60-3777-4544-84ce-abc199e9df18-cert podName:7c84fa60-3777-4544-84ce-abc199e9df18 nodeName:}" failed. No retries permitted until 2026-02-28 04:50:22.944076485 +0000 UTC m=+1011.614202395 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7c84fa60-3777-4544-84ce-abc199e9df18-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj" (UID: "7c84fa60-3777-4544-84ce-abc199e9df18") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 04:50:22 crc kubenswrapper[5014]: E0228 04:50:22.455553 5014 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 28 04:50:22 crc kubenswrapper[5014]: E0228 04:50:22.455613 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0535be64-bda6-4b55-9eb1-fe5a86d3cae8-cert podName:0535be64-bda6-4b55-9eb1-fe5a86d3cae8 nodeName:}" failed. No retries permitted until 2026-02-28 04:50:23.455596196 +0000 UTC m=+1012.125722106 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0535be64-bda6-4b55-9eb1-fe5a86d3cae8-cert") pod "infra-operator-controller-manager-786bd545f6-8hp88" (UID: "0535be64-bda6-4b55-9eb1-fe5a86d3cae8") : secret "infra-operator-webhook-server-cert" not found Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.457268 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-rcb7d" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.474276 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z77b\" (UniqueName: \"kubernetes.io/projected/1089c9f7-0d91-4639-9890-c41acc881797-kube-api-access-4z77b\") pod \"telemetry-operator-controller-manager-5fdb694969-82d7x\" (UID: \"1089c9f7-0d91-4639-9890-c41acc881797\") " pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-82d7x" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.479694 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8r7n\" (UniqueName: \"kubernetes.io/projected/f254469c-2cb3-4f38-8c52-960aa17d27fe-kube-api-access-j8r7n\") pod \"swift-operator-controller-manager-9b9ff9f4d-snccq\" (UID: \"f254469c-2cb3-4f38-8c52-960aa17d27fe\") " pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-snccq" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.491797 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7g8l\" (UniqueName: \"kubernetes.io/projected/7c84fa60-3777-4544-84ce-abc199e9df18-kube-api-access-k7g8l\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj\" (UID: \"7c84fa60-3777-4544-84ce-abc199e9df18\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.512381 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-pg7jw" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.544972 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d62g\" (UniqueName: \"kubernetes.io/projected/d9ccc996-b3d9-44f1-8a6e-c58517885a7c-kube-api-access-4d62g\") pod \"test-operator-controller-manager-55b5ff4dbb-clg6t\" (UID: \"d9ccc996-b3d9-44f1-8a6e-c58517885a7c\") " pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-clg6t" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.545061 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdb4r\" (UniqueName: \"kubernetes.io/projected/f229b3d6-46dd-42ab-bb96-c207b02b35d0-kube-api-access-vdb4r\") pod \"watcher-operator-controller-manager-bccc79885-975zn\" (UID: \"f229b3d6-46dd-42ab-bb96-c207b02b35d0\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-975zn" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.549214 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-pl8nn" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.561786 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-76974fc5d7-9d7k5"] Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.562991 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-76974fc5d7-9d7k5" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.565966 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.569769 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-gjdqb" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.570255 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-zfp8r" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.570562 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.591635 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-76974fc5d7-9d7k5"] Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.600457 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdb4r\" (UniqueName: \"kubernetes.io/projected/f229b3d6-46dd-42ab-bb96-c207b02b35d0-kube-api-access-vdb4r\") pod \"watcher-operator-controller-manager-bccc79885-975zn\" (UID: \"f229b3d6-46dd-42ab-bb96-c207b02b35d0\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-975zn" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.601011 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d62g\" (UniqueName: \"kubernetes.io/projected/d9ccc996-b3d9-44f1-8a6e-c58517885a7c-kube-api-access-4d62g\") pod \"test-operator-controller-manager-55b5ff4dbb-clg6t\" (UID: \"d9ccc996-b3d9-44f1-8a6e-c58517885a7c\") " pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-clg6t" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.649516 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-metrics-certs\") pod \"openstack-operator-controller-manager-76974fc5d7-9d7k5\" (UID: \"b65e9823-17a7-42da-9191-af1db70355b9\") " 
pod="openstack-operators/openstack-operator-controller-manager-76974fc5d7-9d7k5" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.649816 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-webhook-certs\") pod \"openstack-operator-controller-manager-76974fc5d7-9d7k5\" (UID: \"b65e9823-17a7-42da-9191-af1db70355b9\") " pod="openstack-operators/openstack-operator-controller-manager-76974fc5d7-9d7k5" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.650002 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xq9p\" (UniqueName: \"kubernetes.io/projected/b65e9823-17a7-42da-9191-af1db70355b9-kube-api-access-4xq9p\") pod \"openstack-operator-controller-manager-76974fc5d7-9d7k5\" (UID: \"b65e9823-17a7-42da-9191-af1db70355b9\") " pod="openstack-operators/openstack-operator-controller-manager-76974fc5d7-9d7k5" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.663878 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c4ptt"] Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.666708 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c4ptt" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.671293 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-c8926" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.674409 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-snccq" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.676619 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c4ptt"] Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.750870 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7kcr\" (UniqueName: \"kubernetes.io/projected/90ad3ca4-2470-4ab2-9e22-17db53a7237d-kube-api-access-m7kcr\") pod \"rabbitmq-cluster-operator-manager-668c99d594-c4ptt\" (UID: \"90ad3ca4-2470-4ab2-9e22-17db53a7237d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c4ptt" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.750922 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xq9p\" (UniqueName: \"kubernetes.io/projected/b65e9823-17a7-42da-9191-af1db70355b9-kube-api-access-4xq9p\") pod \"openstack-operator-controller-manager-76974fc5d7-9d7k5\" (UID: \"b65e9823-17a7-42da-9191-af1db70355b9\") " pod="openstack-operators/openstack-operator-controller-manager-76974fc5d7-9d7k5" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.751175 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-metrics-certs\") pod \"openstack-operator-controller-manager-76974fc5d7-9d7k5\" (UID: \"b65e9823-17a7-42da-9191-af1db70355b9\") " pod="openstack-operators/openstack-operator-controller-manager-76974fc5d7-9d7k5" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.751317 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-webhook-certs\") pod \"openstack-operator-controller-manager-76974fc5d7-9d7k5\" 
(UID: \"b65e9823-17a7-42da-9191-af1db70355b9\") " pod="openstack-operators/openstack-operator-controller-manager-76974fc5d7-9d7k5" Feb 28 04:50:22 crc kubenswrapper[5014]: E0228 04:50:22.751440 5014 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 28 04:50:22 crc kubenswrapper[5014]: E0228 04:50:22.751488 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-webhook-certs podName:b65e9823-17a7-42da-9191-af1db70355b9 nodeName:}" failed. No retries permitted until 2026-02-28 04:50:23.25147339 +0000 UTC m=+1011.921599300 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-webhook-certs") pod "openstack-operator-controller-manager-76974fc5d7-9d7k5" (UID: "b65e9823-17a7-42da-9191-af1db70355b9") : secret "webhook-server-cert" not found Feb 28 04:50:22 crc kubenswrapper[5014]: E0228 04:50:22.752839 5014 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 28 04:50:22 crc kubenswrapper[5014]: E0228 04:50:22.752882 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-metrics-certs podName:b65e9823-17a7-42da-9191-af1db70355b9 nodeName:}" failed. No retries permitted until 2026-02-28 04:50:23.252872597 +0000 UTC m=+1011.922998507 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-metrics-certs") pod "openstack-operator-controller-manager-76974fc5d7-9d7k5" (UID: "b65e9823-17a7-42da-9191-af1db70355b9") : secret "metrics-server-cert" not found Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.782553 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xq9p\" (UniqueName: \"kubernetes.io/projected/b65e9823-17a7-42da-9191-af1db70355b9-kube-api-access-4xq9p\") pod \"openstack-operator-controller-manager-76974fc5d7-9d7k5\" (UID: \"b65e9823-17a7-42da-9191-af1db70355b9\") " pod="openstack-operators/openstack-operator-controller-manager-76974fc5d7-9d7k5" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.797481 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-82d7x" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.852338 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-clg6t" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.852430 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7kcr\" (UniqueName: \"kubernetes.io/projected/90ad3ca4-2470-4ab2-9e22-17db53a7237d-kube-api-access-m7kcr\") pod \"rabbitmq-cluster-operator-manager-668c99d594-c4ptt\" (UID: \"90ad3ca4-2470-4ab2-9e22-17db53a7237d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c4ptt" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.860407 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6db6876945-p2g4k"] Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.883601 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7kcr\" (UniqueName: \"kubernetes.io/projected/90ad3ca4-2470-4ab2-9e22-17db53a7237d-kube-api-access-m7kcr\") pod \"rabbitmq-cluster-operator-manager-668c99d594-c4ptt\" (UID: \"90ad3ca4-2470-4ab2-9e22-17db53a7237d\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c4ptt" Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.956381 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c84fa60-3777-4544-84ce-abc199e9df18-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj\" (UID: \"7c84fa60-3777-4544-84ce-abc199e9df18\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj" Feb 28 04:50:22 crc kubenswrapper[5014]: E0228 04:50:22.956467 5014 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 04:50:22 crc kubenswrapper[5014]: E0228 04:50:22.956548 5014 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/7c84fa60-3777-4544-84ce-abc199e9df18-cert podName:7c84fa60-3777-4544-84ce-abc199e9df18 nodeName:}" failed. No retries permitted until 2026-02-28 04:50:23.956528382 +0000 UTC m=+1012.626654292 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7c84fa60-3777-4544-84ce-abc199e9df18-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj" (UID: "7c84fa60-3777-4544-84ce-abc199e9df18") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 04:50:22 crc kubenswrapper[5014]: I0228 04:50:22.962013 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-n9r5r"] Feb 28 04:50:23 crc kubenswrapper[5014]: I0228 04:50:23.085875 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-975zn" Feb 28 04:50:23 crc kubenswrapper[5014]: I0228 04:50:23.170791 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c4ptt" Feb 28 04:50:23 crc kubenswrapper[5014]: I0228 04:50:23.268406 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-metrics-certs\") pod \"openstack-operator-controller-manager-76974fc5d7-9d7k5\" (UID: \"b65e9823-17a7-42da-9191-af1db70355b9\") " pod="openstack-operators/openstack-operator-controller-manager-76974fc5d7-9d7k5" Feb 28 04:50:23 crc kubenswrapper[5014]: I0228 04:50:23.268514 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-webhook-certs\") pod \"openstack-operator-controller-manager-76974fc5d7-9d7k5\" (UID: \"b65e9823-17a7-42da-9191-af1db70355b9\") " pod="openstack-operators/openstack-operator-controller-manager-76974fc5d7-9d7k5" Feb 28 04:50:23 crc kubenswrapper[5014]: E0228 04:50:23.268605 5014 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 28 04:50:23 crc kubenswrapper[5014]: E0228 04:50:23.268673 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-metrics-certs podName:b65e9823-17a7-42da-9191-af1db70355b9 nodeName:}" failed. No retries permitted until 2026-02-28 04:50:24.268656234 +0000 UTC m=+1012.938782134 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-metrics-certs") pod "openstack-operator-controller-manager-76974fc5d7-9d7k5" (UID: "b65e9823-17a7-42da-9191-af1db70355b9") : secret "metrics-server-cert" not found Feb 28 04:50:23 crc kubenswrapper[5014]: E0228 04:50:23.269076 5014 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 28 04:50:23 crc kubenswrapper[5014]: E0228 04:50:23.269104 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-webhook-certs podName:b65e9823-17a7-42da-9191-af1db70355b9 nodeName:}" failed. No retries permitted until 2026-02-28 04:50:24.269095516 +0000 UTC m=+1012.939221426 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-webhook-certs") pod "openstack-operator-controller-manager-76974fc5d7-9d7k5" (UID: "b65e9823-17a7-42da-9191-af1db70355b9") : secret "webhook-server-cert" not found Feb 28 04:50:23 crc kubenswrapper[5014]: I0228 04:50:23.357593 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-545456dc4-7bmg5"] Feb 28 04:50:23 crc kubenswrapper[5014]: I0228 04:50:23.370498 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7c789f89c6-cfb47"] Feb 28 04:50:23 crc kubenswrapper[5014]: W0228 04:50:23.375445 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42fc68c6_e92f_4449_9398_518f904c58fb.slice/crio-a201f37d6d6444f836172b75cb73b58f057c77c0177df8da85313d812f213dff WatchSource:0}: Error finding container a201f37d6d6444f836172b75cb73b58f057c77c0177df8da85313d812f213dff: Status 404 returned error can't find the 
container with id a201f37d6d6444f836172b75cb73b58f057c77c0177df8da85313d812f213dff Feb 28 04:50:23 crc kubenswrapper[5014]: I0228 04:50:23.375985 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54688575f-xpd29"] Feb 28 04:50:23 crc kubenswrapper[5014]: I0228 04:50:23.471116 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0535be64-bda6-4b55-9eb1-fe5a86d3cae8-cert\") pod \"infra-operator-controller-manager-786bd545f6-8hp88\" (UID: \"0535be64-bda6-4b55-9eb1-fe5a86d3cae8\") " pod="openstack-operators/infra-operator-controller-manager-786bd545f6-8hp88" Feb 28 04:50:23 crc kubenswrapper[5014]: E0228 04:50:23.471434 5014 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 28 04:50:23 crc kubenswrapper[5014]: E0228 04:50:23.471533 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0535be64-bda6-4b55-9eb1-fe5a86d3cae8-cert podName:0535be64-bda6-4b55-9eb1-fe5a86d3cae8 nodeName:}" failed. No retries permitted until 2026-02-28 04:50:25.471506697 +0000 UTC m=+1014.141632677 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0535be64-bda6-4b55-9eb1-fe5a86d3cae8-cert") pod "infra-operator-controller-manager-786bd545f6-8hp88" (UID: "0535be64-bda6-4b55-9eb1-fe5a86d3cae8") : secret "infra-operator-webhook-server-cert" not found Feb 28 04:50:23 crc kubenswrapper[5014]: I0228 04:50:23.489782 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-7bmg5" event={"ID":"dd26043c-48bc-4202-8266-d2590b6530e3","Type":"ContainerStarted","Data":"ec32c66204c82301725ee8017df2c613ff859ffa50849c3a30ca252131b6edb9"} Feb 28 04:50:23 crc kubenswrapper[5014]: I0228 04:50:23.491052 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-n9r5r" event={"ID":"385767a3-7908-4f17-9f63-ea25c784c715","Type":"ContainerStarted","Data":"36fe913b6fd9f66946b31ad131c8d4f7ab0d0a8f10f68c880960afd236f7d8a2"} Feb 28 04:50:23 crc kubenswrapper[5014]: I0228 04:50:23.492095 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-cfb47" event={"ID":"42fc68c6-e92f-4449-9398-518f904c58fb","Type":"ContainerStarted","Data":"a201f37d6d6444f836172b75cb73b58f057c77c0177df8da85313d812f213dff"} Feb 28 04:50:23 crc kubenswrapper[5014]: I0228 04:50:23.493103 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54688575f-xpd29" event={"ID":"f5555801-1739-45d3-946f-3b731b87c593","Type":"ContainerStarted","Data":"f053e022fb23f171d74f5d78cce9b98b44dd6f0d2b3679ce682382004275e15a"} Feb 28 04:50:23 crc kubenswrapper[5014]: I0228 04:50:23.497773 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-p2g4k" 
event={"ID":"7dfedb71-1284-4e5c-826d-efb134b34cdb","Type":"ContainerStarted","Data":"2588c52ac7ac172192245b46da44d2de1baf62016e953e45d5ae462b90993044"} Feb 28 04:50:23 crc kubenswrapper[5014]: I0228 04:50:23.764832 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-75684d597f-pg7jw"] Feb 28 04:50:23 crc kubenswrapper[5014]: W0228 04:50:23.782834 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67e3c7dc_a78f_4039_b326_93795dd322ca.slice/crio-c8e11442907439ca4bf8951edfb7ffea1f8c3f3bcdfa4655443cd45b9e89d573 WatchSource:0}: Error finding container c8e11442907439ca4bf8951edfb7ffea1f8c3f3bcdfa4655443cd45b9e89d573: Status 404 returned error can't find the container with id c8e11442907439ca4bf8951edfb7ffea1f8c3f3bcdfa4655443cd45b9e89d573 Feb 28 04:50:23 crc kubenswrapper[5014]: I0228 04:50:23.796366 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-648564c9fc-gjdqb"] Feb 28 04:50:23 crc kubenswrapper[5014]: I0228 04:50:23.813119 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-ppf6c"] Feb 28 04:50:23 crc kubenswrapper[5014]: W0228 04:50:23.816593 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9fe3aab0_3f3b_4fb3_a5da_2206ba55e813.slice/crio-e01c4e0488719c3de79ad4b679d1f375c8f72f5c64ccdf5d250e61ac5b157f39 WatchSource:0}: Error finding container e01c4e0488719c3de79ad4b679d1f375c8f72f5c64ccdf5d250e61ac5b157f39: Status 404 returned error can't find the container with id e01c4e0488719c3de79ad4b679d1f375c8f72f5c64ccdf5d250e61ac5b157f39 Feb 28 04:50:23 crc kubenswrapper[5014]: I0228 04:50:23.826071 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-s4j6f"] Feb 28 04:50:23 crc kubenswrapper[5014]: W0228 04:50:23.837115 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f8b5a91_a57a_4679_a625_007592105038.slice/crio-195f183ced6e1de35c45098aedca9f2507e821880f2010bdc806b392fe6bd862 WatchSource:0}: Error finding container 195f183ced6e1de35c45098aedca9f2507e821880f2010bdc806b392fe6bd862: Status 404 returned error can't find the container with id 195f183ced6e1de35c45098aedca9f2507e821880f2010bdc806b392fe6bd862 Feb 28 04:50:23 crc kubenswrapper[5014]: I0228 04:50:23.838980 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-cf99c678f-2srvx"] Feb 28 04:50:23 crc kubenswrapper[5014]: I0228 04:50:23.847844 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-pl8nn"] Feb 28 04:50:23 crc kubenswrapper[5014]: W0228 04:50:23.859486 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b9d913b_e0e8_42f5_8d98_60fd3c219ff8.slice/crio-68b20e0460e9ce199d0cd0e42c1df11e24ebfa4b759ff2d72375c979a3c7423c WatchSource:0}: Error finding container 68b20e0460e9ce199d0cd0e42c1df11e24ebfa4b759ff2d72375c979a3c7423c: Status 404 returned error can't find the container with id 68b20e0460e9ce199d0cd0e42c1df11e24ebfa4b759ff2d72375c979a3c7423c Feb 28 04:50:23 crc kubenswrapper[5014]: I0228 04:50:23.863874 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-74b6b5dc96-rcb7d"] Feb 28 04:50:23 crc kubenswrapper[5014]: I0228 04:50:23.885128 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-5d87c9d997-587tn"] Feb 28 04:50:23 crc kubenswrapper[5014]: E0228 
04:50:23.903868 5014 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:f309cdea8084a4b1e8cbcd732d6e250fd93c55cfd1b48ba9026907c8591faab7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-j8r7n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-9b9ff9f4d-snccq_openstack-operators(f254469c-2cb3-4f38-8c52-960aa17d27fe): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 28 04:50:23 crc kubenswrapper[5014]: E0228 04:50:23.905181 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-snccq" podUID="f254469c-2cb3-4f38-8c52-960aa17d27fe" Feb 28 04:50:23 crc kubenswrapper[5014]: E0228 04:50:23.905648 5014 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:1b9074a4ce16396d8bd2d30a475fc8c2f004f75a023e3eef8950661e89c0bcc6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4z77b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5fdb694969-82d7x_openstack-operators(1089c9f7-0d91-4639-9890-c41acc881797): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 28 04:50:23 crc kubenswrapper[5014]: E0228 04:50:23.911434 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-82d7x" podUID="1089c9f7-0d91-4639-9890-c41acc881797" Feb 28 04:50:23 crc kubenswrapper[5014]: I0228 04:50:23.915242 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-64db6967f8-5t42k"] Feb 28 04:50:23 crc kubenswrapper[5014]: I0228 04:50:23.927936 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-gm8rn"] Feb 28 04:50:23 crc kubenswrapper[5014]: E0228 04:50:23.930843 5014 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vdb4r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-bccc79885-975zn_openstack-operators(f229b3d6-46dd-42ab-bb96-c207b02b35d0): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 28 04:50:23 crc kubenswrapper[5014]: E0228 04:50:23.931221 5014 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:81e43c058d9af1d3bc31704010c630bc2a574c2ee388aa0ffe8c7b9621a7d051,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-njc4j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-64db6967f8-5t42k_openstack-operators(f734a97b-b94d-4132-a426-15111b3fc207): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 28 04:50:23 crc kubenswrapper[5014]: E0228 04:50:23.931230 5014 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4d62g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-55b5ff4dbb-clg6t_openstack-operators(d9ccc996-b3d9-44f1-8a6e-c58517885a7c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 28 04:50:23 crc kubenswrapper[5014]: E0228 04:50:23.950375 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-975zn" podUID="f229b3d6-46dd-42ab-bb96-c207b02b35d0" Feb 28 04:50:23 crc kubenswrapper[5014]: W0228 04:50:23.950470 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod90ad3ca4_2470_4ab2_9e22_17db53a7237d.slice/crio-481fc1db01f76ae8ae5c4670963757346f938b9870c8abfaeb1d25cb8a7e7eed WatchSource:0}: Error finding container 481fc1db01f76ae8ae5c4670963757346f938b9870c8abfaeb1d25cb8a7e7eed: Status 404 returned error can't find the container with id 481fc1db01f76ae8ae5c4670963757346f938b9870c8abfaeb1d25cb8a7e7eed Feb 28 04:50:23 crc kubenswrapper[5014]: E0228 04:50:23.950496 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-5t42k" podUID="f734a97b-b94d-4132-a426-15111b3fc207" Feb 28 04:50:23 crc kubenswrapper[5014]: E0228 04:50:23.952086 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-clg6t" podUID="d9ccc996-b3d9-44f1-8a6e-c58517885a7c" Feb 28 04:50:23 crc kubenswrapper[5014]: E0228 04:50:23.953462 5014 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m7kcr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-c4ptt_openstack-operators(90ad3ca4-2470-4ab2-9e22-17db53a7237d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 28 04:50:23 crc kubenswrapper[5014]: I0228 04:50:23.953555 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-55b5ff4dbb-clg6t"] Feb 28 04:50:23 crc kubenswrapper[5014]: E0228 04:50:23.958052 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c4ptt" podUID="90ad3ca4-2470-4ab2-9e22-17db53a7237d" Feb 28 04:50:23 crc kubenswrapper[5014]: I0228 04:50:23.958966 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9b9ff9f4d-snccq"] Feb 28 04:50:23 crc kubenswrapper[5014]: I0228 04:50:23.963660 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5fdb694969-82d7x"] Feb 28 04:50:23 
crc kubenswrapper[5014]: I0228 04:50:23.975315 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c4ptt"] Feb 28 04:50:23 crc kubenswrapper[5014]: I0228 04:50:23.976917 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c84fa60-3777-4544-84ce-abc199e9df18-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj\" (UID: \"7c84fa60-3777-4544-84ce-abc199e9df18\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj" Feb 28 04:50:23 crc kubenswrapper[5014]: E0228 04:50:23.977072 5014 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 04:50:23 crc kubenswrapper[5014]: E0228 04:50:23.977129 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c84fa60-3777-4544-84ce-abc199e9df18-cert podName:7c84fa60-3777-4544-84ce-abc199e9df18 nodeName:}" failed. No retries permitted until 2026-02-28 04:50:25.977115549 +0000 UTC m=+1014.647241459 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7c84fa60-3777-4544-84ce-abc199e9df18-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj" (UID: "7c84fa60-3777-4544-84ce-abc199e9df18") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 04:50:23 crc kubenswrapper[5014]: I0228 04:50:23.986867 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-975zn"] Feb 28 04:50:24 crc kubenswrapper[5014]: I0228 04:50:24.282147 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-metrics-certs\") pod \"openstack-operator-controller-manager-76974fc5d7-9d7k5\" (UID: \"b65e9823-17a7-42da-9191-af1db70355b9\") " pod="openstack-operators/openstack-operator-controller-manager-76974fc5d7-9d7k5" Feb 28 04:50:24 crc kubenswrapper[5014]: I0228 04:50:24.282256 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-webhook-certs\") pod \"openstack-operator-controller-manager-76974fc5d7-9d7k5\" (UID: \"b65e9823-17a7-42da-9191-af1db70355b9\") " pod="openstack-operators/openstack-operator-controller-manager-76974fc5d7-9d7k5" Feb 28 04:50:24 crc kubenswrapper[5014]: E0228 04:50:24.282385 5014 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 28 04:50:24 crc kubenswrapper[5014]: E0228 04:50:24.282428 5014 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 28 04:50:24 crc kubenswrapper[5014]: E0228 04:50:24.282436 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-webhook-certs 
podName:b65e9823-17a7-42da-9191-af1db70355b9 nodeName:}" failed. No retries permitted until 2026-02-28 04:50:26.282418996 +0000 UTC m=+1014.952544906 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-webhook-certs") pod "openstack-operator-controller-manager-76974fc5d7-9d7k5" (UID: "b65e9823-17a7-42da-9191-af1db70355b9") : secret "webhook-server-cert" not found Feb 28 04:50:24 crc kubenswrapper[5014]: E0228 04:50:24.282518 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-metrics-certs podName:b65e9823-17a7-42da-9191-af1db70355b9 nodeName:}" failed. No retries permitted until 2026-02-28 04:50:26.282497358 +0000 UTC m=+1014.952623338 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-metrics-certs") pod "openstack-operator-controller-manager-76974fc5d7-9d7k5" (UID: "b65e9823-17a7-42da-9191-af1db70355b9") : secret "metrics-server-cert" not found Feb 28 04:50:24 crc kubenswrapper[5014]: I0228 04:50:24.509677 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-snccq" event={"ID":"f254469c-2cb3-4f38-8c52-960aa17d27fe","Type":"ContainerStarted","Data":"6fb66f148483691837b5fad8fdfb4a0b9e7a95749575fabd9f4b79985794909c"} Feb 28 04:50:24 crc kubenswrapper[5014]: I0228 04:50:24.510716 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-pl8nn" event={"ID":"895709de-d62e-4101-8294-d73238790d9c","Type":"ContainerStarted","Data":"fadd71792882b2d5bed5fdf77827ed9c2a13ff82446cd8623a30531c2cbe05c3"} Feb 28 04:50:24 crc kubenswrapper[5014]: E0228 04:50:24.510799 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:f309cdea8084a4b1e8cbcd732d6e250fd93c55cfd1b48ba9026907c8591faab7\\\"\"" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-snccq" podUID="f254469c-2cb3-4f38-8c52-960aa17d27fe" Feb 28 04:50:24 crc kubenswrapper[5014]: I0228 04:50:24.515009 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-pg7jw" event={"ID":"67e3c7dc-a78f-4039-b326-93795dd322ca","Type":"ContainerStarted","Data":"c8e11442907439ca4bf8951edfb7ffea1f8c3f3bcdfa4655443cd45b9e89d573"} Feb 28 04:50:24 crc kubenswrapper[5014]: I0228 04:50:24.515106 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-gm8rn" event={"ID":"5189b3c2-1b93-432b-b1a3-dc579ef2abb6","Type":"ContainerStarted","Data":"8d7c9527a600ed76282d2ef866578abebcb9aa7006868f1872fad95901c78c0d"} Feb 28 04:50:24 crc kubenswrapper[5014]: I0228 04:50:24.515124 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-82d7x" event={"ID":"1089c9f7-0d91-4639-9890-c41acc881797","Type":"ContainerStarted","Data":"fa339f99ea69dbdb27aab69f6a94f108e798e2e52eb4047a40a400d9e02c806e"} Feb 28 04:50:24 crc kubenswrapper[5014]: I0228 04:50:24.516170 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-ppf6c" event={"ID":"5b9d913b-e0e8-42f5-8d98-60fd3c219ff8","Type":"ContainerStarted","Data":"68b20e0460e9ce199d0cd0e42c1df11e24ebfa4b759ff2d72375c979a3c7423c"} Feb 28 04:50:24 crc kubenswrapper[5014]: E0228 04:50:24.516447 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:1b9074a4ce16396d8bd2d30a475fc8c2f004f75a023e3eef8950661e89c0bcc6\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-82d7x" podUID="1089c9f7-0d91-4639-9890-c41acc881797" Feb 28 04:50:24 crc kubenswrapper[5014]: I0228 04:50:24.520036 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-clg6t" event={"ID":"d9ccc996-b3d9-44f1-8a6e-c58517885a7c","Type":"ContainerStarted","Data":"5c4ab73df44ec5309fccd035daacc288f512a1946aae7bba220b19be4044f1f4"} Feb 28 04:50:24 crc kubenswrapper[5014]: E0228 04:50:24.521193 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968\\\"\"" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-clg6t" podUID="d9ccc996-b3d9-44f1-8a6e-c58517885a7c" Feb 28 04:50:24 crc kubenswrapper[5014]: I0228 04:50:24.522520 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-rcb7d" event={"ID":"d56f3210-6165-4bd1-b2e0-d8eb94b370a9","Type":"ContainerStarted","Data":"47bbde52f867ca04c12ff91c64f3246425e50d7dada5fa6af98edbf7bb7c16c8"} Feb 28 04:50:24 crc kubenswrapper[5014]: I0228 04:50:24.524865 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-587tn" event={"ID":"52707aa4-b40d-4046-a721-e3b31a1f9648","Type":"ContainerStarted","Data":"3bfb3035a6819f6c05b80c7a04a6923e76b177dbe16f4e42dfea35fcd268763a"} Feb 28 04:50:24 crc kubenswrapper[5014]: I0228 04:50:24.533114 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-gjdqb" 
event={"ID":"07f212b7-6aea-4a43-95fa-4637b6dc1d87","Type":"ContainerStarted","Data":"cf655b77a1f48cb5766fc5ab18f23387c57d684b3846fb620e5ce0b90d4c2096"} Feb 28 04:50:24 crc kubenswrapper[5014]: I0228 04:50:24.535559 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c4ptt" event={"ID":"90ad3ca4-2470-4ab2-9e22-17db53a7237d","Type":"ContainerStarted","Data":"481fc1db01f76ae8ae5c4670963757346f938b9870c8abfaeb1d25cb8a7e7eed"} Feb 28 04:50:24 crc kubenswrapper[5014]: I0228 04:50:24.537668 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-2srvx" event={"ID":"9fe3aab0-3f3b-4fb3-a5da-2206ba55e813","Type":"ContainerStarted","Data":"e01c4e0488719c3de79ad4b679d1f375c8f72f5c64ccdf5d250e61ac5b157f39"} Feb 28 04:50:24 crc kubenswrapper[5014]: E0228 04:50:24.542437 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c4ptt" podUID="90ad3ca4-2470-4ab2-9e22-17db53a7237d" Feb 28 04:50:24 crc kubenswrapper[5014]: I0228 04:50:24.543370 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-975zn" event={"ID":"f229b3d6-46dd-42ab-bb96-c207b02b35d0","Type":"ContainerStarted","Data":"341830f132b043eb52fb8216077e488230a90ac06054ab45d1e8453514ef2678"} Feb 28 04:50:24 crc kubenswrapper[5014]: E0228 04:50:24.548705 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-975zn" podUID="f229b3d6-46dd-42ab-bb96-c207b02b35d0" Feb 28 04:50:24 crc kubenswrapper[5014]: I0228 04:50:24.548767 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-5t42k" event={"ID":"f734a97b-b94d-4132-a426-15111b3fc207","Type":"ContainerStarted","Data":"5cec7c0f623cb41c8f57f95df4a4bb4bf9a7ff040dbedb0b78258de151e57030"} Feb 28 04:50:24 crc kubenswrapper[5014]: I0228 04:50:24.550902 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-s4j6f" event={"ID":"5f8b5a91-a57a-4679-a625-007592105038","Type":"ContainerStarted","Data":"195f183ced6e1de35c45098aedca9f2507e821880f2010bdc806b392fe6bd862"} Feb 28 04:50:24 crc kubenswrapper[5014]: E0228 04:50:24.556297 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:81e43c058d9af1d3bc31704010c630bc2a574c2ee388aa0ffe8c7b9621a7d051\\\"\"" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-5t42k" podUID="f734a97b-b94d-4132-a426-15111b3fc207" Feb 28 04:50:25 crc kubenswrapper[5014]: I0228 04:50:25.510572 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0535be64-bda6-4b55-9eb1-fe5a86d3cae8-cert\") pod \"infra-operator-controller-manager-786bd545f6-8hp88\" (UID: \"0535be64-bda6-4b55-9eb1-fe5a86d3cae8\") " pod="openstack-operators/infra-operator-controller-manager-786bd545f6-8hp88" Feb 28 04:50:25 crc kubenswrapper[5014]: E0228 04:50:25.510843 5014 secret.go:188] Couldn't get secret 
openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 28 04:50:25 crc kubenswrapper[5014]: E0228 04:50:25.511037 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0535be64-bda6-4b55-9eb1-fe5a86d3cae8-cert podName:0535be64-bda6-4b55-9eb1-fe5a86d3cae8 nodeName:}" failed. No retries permitted until 2026-02-28 04:50:29.511016016 +0000 UTC m=+1018.181141926 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0535be64-bda6-4b55-9eb1-fe5a86d3cae8-cert") pod "infra-operator-controller-manager-786bd545f6-8hp88" (UID: "0535be64-bda6-4b55-9eb1-fe5a86d3cae8") : secret "infra-operator-webhook-server-cert" not found Feb 28 04:50:25 crc kubenswrapper[5014]: E0228 04:50:25.560640 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-975zn" podUID="f229b3d6-46dd-42ab-bb96-c207b02b35d0" Feb 28 04:50:25 crc kubenswrapper[5014]: E0228 04:50:25.560974 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968\\\"\"" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-clg6t" podUID="d9ccc996-b3d9-44f1-8a6e-c58517885a7c" Feb 28 04:50:25 crc kubenswrapper[5014]: E0228 04:50:25.561009 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:1b9074a4ce16396d8bd2d30a475fc8c2f004f75a023e3eef8950661e89c0bcc6\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-82d7x" podUID="1089c9f7-0d91-4639-9890-c41acc881797" Feb 28 04:50:25 crc kubenswrapper[5014]: E0228 04:50:25.561045 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:81e43c058d9af1d3bc31704010c630bc2a574c2ee388aa0ffe8c7b9621a7d051\\\"\"" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-5t42k" podUID="f734a97b-b94d-4132-a426-15111b3fc207" Feb 28 04:50:25 crc kubenswrapper[5014]: E0228 04:50:25.561507 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:f309cdea8084a4b1e8cbcd732d6e250fd93c55cfd1b48ba9026907c8591faab7\\\"\"" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-snccq" podUID="f254469c-2cb3-4f38-8c52-960aa17d27fe" Feb 28 04:50:25 crc kubenswrapper[5014]: E0228 04:50:25.562230 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c4ptt" podUID="90ad3ca4-2470-4ab2-9e22-17db53a7237d" Feb 28 04:50:26 crc kubenswrapper[5014]: I0228 04:50:26.017787 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c84fa60-3777-4544-84ce-abc199e9df18-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj\" (UID: 
\"7c84fa60-3777-4544-84ce-abc199e9df18\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj" Feb 28 04:50:26 crc kubenswrapper[5014]: E0228 04:50:26.018142 5014 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 04:50:26 crc kubenswrapper[5014]: E0228 04:50:26.018198 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c84fa60-3777-4544-84ce-abc199e9df18-cert podName:7c84fa60-3777-4544-84ce-abc199e9df18 nodeName:}" failed. No retries permitted until 2026-02-28 04:50:30.01818418 +0000 UTC m=+1018.688310090 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7c84fa60-3777-4544-84ce-abc199e9df18-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj" (UID: "7c84fa60-3777-4544-84ce-abc199e9df18") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 04:50:26 crc kubenswrapper[5014]: I0228 04:50:26.322414 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-webhook-certs\") pod \"openstack-operator-controller-manager-76974fc5d7-9d7k5\" (UID: \"b65e9823-17a7-42da-9191-af1db70355b9\") " pod="openstack-operators/openstack-operator-controller-manager-76974fc5d7-9d7k5" Feb 28 04:50:26 crc kubenswrapper[5014]: I0228 04:50:26.322529 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-metrics-certs\") pod \"openstack-operator-controller-manager-76974fc5d7-9d7k5\" (UID: \"b65e9823-17a7-42da-9191-af1db70355b9\") " pod="openstack-operators/openstack-operator-controller-manager-76974fc5d7-9d7k5" Feb 28 04:50:26 crc kubenswrapper[5014]: E0228 
04:50:26.322617 5014 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 28 04:50:26 crc kubenswrapper[5014]: E0228 04:50:26.322619 5014 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 28 04:50:26 crc kubenswrapper[5014]: E0228 04:50:26.322672 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-metrics-certs podName:b65e9823-17a7-42da-9191-af1db70355b9 nodeName:}" failed. No retries permitted until 2026-02-28 04:50:30.322656815 +0000 UTC m=+1018.992782725 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-metrics-certs") pod "openstack-operator-controller-manager-76974fc5d7-9d7k5" (UID: "b65e9823-17a7-42da-9191-af1db70355b9") : secret "metrics-server-cert" not found Feb 28 04:50:26 crc kubenswrapper[5014]: E0228 04:50:26.322684 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-webhook-certs podName:b65e9823-17a7-42da-9191-af1db70355b9 nodeName:}" failed. No retries permitted until 2026-02-28 04:50:30.322679606 +0000 UTC m=+1018.992805516 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-webhook-certs") pod "openstack-operator-controller-manager-76974fc5d7-9d7k5" (UID: "b65e9823-17a7-42da-9191-af1db70355b9") : secret "webhook-server-cert" not found Feb 28 04:50:29 crc kubenswrapper[5014]: I0228 04:50:29.581818 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0535be64-bda6-4b55-9eb1-fe5a86d3cae8-cert\") pod \"infra-operator-controller-manager-786bd545f6-8hp88\" (UID: \"0535be64-bda6-4b55-9eb1-fe5a86d3cae8\") " pod="openstack-operators/infra-operator-controller-manager-786bd545f6-8hp88" Feb 28 04:50:29 crc kubenswrapper[5014]: E0228 04:50:29.582014 5014 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 28 04:50:29 crc kubenswrapper[5014]: E0228 04:50:29.582296 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0535be64-bda6-4b55-9eb1-fe5a86d3cae8-cert podName:0535be64-bda6-4b55-9eb1-fe5a86d3cae8 nodeName:}" failed. No retries permitted until 2026-02-28 04:50:37.582276304 +0000 UTC m=+1026.252402204 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0535be64-bda6-4b55-9eb1-fe5a86d3cae8-cert") pod "infra-operator-controller-manager-786bd545f6-8hp88" (UID: "0535be64-bda6-4b55-9eb1-fe5a86d3cae8") : secret "infra-operator-webhook-server-cert" not found Feb 28 04:50:30 crc kubenswrapper[5014]: I0228 04:50:30.088400 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c84fa60-3777-4544-84ce-abc199e9df18-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj\" (UID: \"7c84fa60-3777-4544-84ce-abc199e9df18\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj" Feb 28 04:50:30 crc kubenswrapper[5014]: E0228 04:50:30.088765 5014 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 04:50:30 crc kubenswrapper[5014]: E0228 04:50:30.088866 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c84fa60-3777-4544-84ce-abc199e9df18-cert podName:7c84fa60-3777-4544-84ce-abc199e9df18 nodeName:}" failed. No retries permitted until 2026-02-28 04:50:38.088849742 +0000 UTC m=+1026.758975652 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7c84fa60-3777-4544-84ce-abc199e9df18-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj" (UID: "7c84fa60-3777-4544-84ce-abc199e9df18") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 04:50:30 crc kubenswrapper[5014]: I0228 04:50:30.393496 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-metrics-certs\") pod \"openstack-operator-controller-manager-76974fc5d7-9d7k5\" (UID: \"b65e9823-17a7-42da-9191-af1db70355b9\") " pod="openstack-operators/openstack-operator-controller-manager-76974fc5d7-9d7k5" Feb 28 04:50:30 crc kubenswrapper[5014]: I0228 04:50:30.393584 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-webhook-certs\") pod \"openstack-operator-controller-manager-76974fc5d7-9d7k5\" (UID: \"b65e9823-17a7-42da-9191-af1db70355b9\") " pod="openstack-operators/openstack-operator-controller-manager-76974fc5d7-9d7k5" Feb 28 04:50:30 crc kubenswrapper[5014]: E0228 04:50:30.393705 5014 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 28 04:50:30 crc kubenswrapper[5014]: E0228 04:50:30.393750 5014 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 28 04:50:30 crc kubenswrapper[5014]: E0228 04:50:30.393786 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-metrics-certs podName:b65e9823-17a7-42da-9191-af1db70355b9 nodeName:}" failed. No retries permitted until 2026-02-28 04:50:38.393767079 +0000 UTC m=+1027.063892989 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-metrics-certs") pod "openstack-operator-controller-manager-76974fc5d7-9d7k5" (UID: "b65e9823-17a7-42da-9191-af1db70355b9") : secret "metrics-server-cert" not found Feb 28 04:50:30 crc kubenswrapper[5014]: E0228 04:50:30.393882 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-webhook-certs podName:b65e9823-17a7-42da-9191-af1db70355b9 nodeName:}" failed. No retries permitted until 2026-02-28 04:50:38.39379394 +0000 UTC m=+1027.063919950 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-webhook-certs") pod "openstack-operator-controller-manager-76974fc5d7-9d7k5" (UID: "b65e9823-17a7-42da-9191-af1db70355b9") : secret "webhook-server-cert" not found Feb 28 04:50:32 crc kubenswrapper[5014]: I0228 04:50:32.927582 5014 scope.go:117] "RemoveContainer" containerID="ddaf4450281323c2e85864564e21a30cc53471b4aec6c913c6d11cbc3f8658d9" Feb 28 04:50:37 crc kubenswrapper[5014]: I0228 04:50:37.625905 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0535be64-bda6-4b55-9eb1-fe5a86d3cae8-cert\") pod \"infra-operator-controller-manager-786bd545f6-8hp88\" (UID: \"0535be64-bda6-4b55-9eb1-fe5a86d3cae8\") " pod="openstack-operators/infra-operator-controller-manager-786bd545f6-8hp88" Feb 28 04:50:37 crc kubenswrapper[5014]: E0228 04:50:37.626113 5014 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 28 04:50:37 crc kubenswrapper[5014]: E0228 04:50:37.626558 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0535be64-bda6-4b55-9eb1-fe5a86d3cae8-cert 
podName:0535be64-bda6-4b55-9eb1-fe5a86d3cae8 nodeName:}" failed. No retries permitted until 2026-02-28 04:50:53.626539589 +0000 UTC m=+1042.296665499 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/0535be64-bda6-4b55-9eb1-fe5a86d3cae8-cert") pod "infra-operator-controller-manager-786bd545f6-8hp88" (UID: "0535be64-bda6-4b55-9eb1-fe5a86d3cae8") : secret "infra-operator-webhook-server-cert" not found Feb 28 04:50:38 crc kubenswrapper[5014]: I0228 04:50:38.133906 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c84fa60-3777-4544-84ce-abc199e9df18-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj\" (UID: \"7c84fa60-3777-4544-84ce-abc199e9df18\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj" Feb 28 04:50:38 crc kubenswrapper[5014]: E0228 04:50:38.134113 5014 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 04:50:38 crc kubenswrapper[5014]: E0228 04:50:38.134243 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7c84fa60-3777-4544-84ce-abc199e9df18-cert podName:7c84fa60-3777-4544-84ce-abc199e9df18 nodeName:}" failed. No retries permitted until 2026-02-28 04:50:54.134215607 +0000 UTC m=+1042.804341517 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7c84fa60-3777-4544-84ce-abc199e9df18-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj" (UID: "7c84fa60-3777-4544-84ce-abc199e9df18") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 28 04:50:38 crc kubenswrapper[5014]: I0228 04:50:38.442936 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-metrics-certs\") pod \"openstack-operator-controller-manager-76974fc5d7-9d7k5\" (UID: \"b65e9823-17a7-42da-9191-af1db70355b9\") " pod="openstack-operators/openstack-operator-controller-manager-76974fc5d7-9d7k5" Feb 28 04:50:38 crc kubenswrapper[5014]: E0228 04:50:38.443184 5014 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 28 04:50:38 crc kubenswrapper[5014]: E0228 04:50:38.443245 5014 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 28 04:50:38 crc kubenswrapper[5014]: E0228 04:50:38.443268 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-webhook-certs podName:b65e9823-17a7-42da-9191-af1db70355b9 nodeName:}" failed. No retries permitted until 2026-02-28 04:50:54.443243004 +0000 UTC m=+1043.113368914 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-webhook-certs") pod "openstack-operator-controller-manager-76974fc5d7-9d7k5" (UID: "b65e9823-17a7-42da-9191-af1db70355b9") : secret "webhook-server-cert" not found Feb 28 04:50:38 crc kubenswrapper[5014]: E0228 04:50:38.443404 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-metrics-certs podName:b65e9823-17a7-42da-9191-af1db70355b9 nodeName:}" failed. No retries permitted until 2026-02-28 04:50:54.443373158 +0000 UTC m=+1043.113499108 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-metrics-certs") pod "openstack-operator-controller-manager-76974fc5d7-9d7k5" (UID: "b65e9823-17a7-42da-9191-af1db70355b9") : secret "metrics-server-cert" not found Feb 28 04:50:38 crc kubenswrapper[5014]: I0228 04:50:38.443449 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-webhook-certs\") pod \"openstack-operator-controller-manager-76974fc5d7-9d7k5\" (UID: \"b65e9823-17a7-42da-9191-af1db70355b9\") " pod="openstack-operators/openstack-operator-controller-manager-76974fc5d7-9d7k5" Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.719864 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-gm8rn" event={"ID":"5189b3c2-1b93-432b-b1a3-dc579ef2abb6","Type":"ContainerStarted","Data":"2399c970d10c2e9850c7a6293014aec477ba90f60b3d4ad96ddd6a337c1e18c7"} Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.720452 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-67d996989d-gm8rn" Feb 28 04:50:44 crc 
kubenswrapper[5014]: I0228 04:50:44.721959 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-n9r5r" event={"ID":"385767a3-7908-4f17-9f63-ea25c784c715","Type":"ContainerStarted","Data":"d2a6b69962d302a898c8a9da0851a010041300c25ca5a48a7baa47a325c85145"} Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.722042 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-n9r5r" Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.723360 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-ppf6c" event={"ID":"5b9d913b-e0e8-42f5-8d98-60fd3c219ff8","Type":"ContainerStarted","Data":"c9cbb954c4d5380352bdc1107d62ecf7942b716dbfb590f383548542ac080386"} Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.723711 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-ppf6c" Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.726080 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-clg6t" event={"ID":"d9ccc996-b3d9-44f1-8a6e-c58517885a7c","Type":"ContainerStarted","Data":"ddf913f044fdf36c149043f5e789201cda46e48e983e6a31a67a358d4d642782"} Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.726287 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-clg6t" Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.727588 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-587tn" 
event={"ID":"52707aa4-b40d-4046-a721-e3b31a1f9648","Type":"ContainerStarted","Data":"996b23b9c5bc271b25e14d3fb5530e0820c05264395bb80e3d7862af7d68c472"} Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.727700 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-587tn" Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.729085 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-s4j6f" event={"ID":"5f8b5a91-a57a-4679-a625-007592105038","Type":"ContainerStarted","Data":"48e2e2ccd503bd219f51a6809a8ff22a4973d775355051f21b37983695713316"} Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.729153 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-s4j6f" Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.731907 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-pl8nn" event={"ID":"895709de-d62e-4101-8294-d73238790d9c","Type":"ContainerStarted","Data":"acbf7af01bb177c99b06753dd75be7c2b19329d76fd44e55d01e38432d4bf920"} Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.731963 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-pl8nn" Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.733716 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-cfb47" event={"ID":"42fc68c6-e92f-4449-9398-518f904c58fb","Type":"ContainerStarted","Data":"eb6c4a3eb6596738aaa2991f4aff868adf1f90771f21b6ea87edb2d59801aba4"} Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.733774 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-cfb47" Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.735211 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-rcb7d" event={"ID":"d56f3210-6165-4bd1-b2e0-d8eb94b370a9","Type":"ContainerStarted","Data":"9660718bf5ebca285fc90e35ff7f761adf0c99fcdb64bba13d1026c9af0349b9"} Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.735617 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-rcb7d" Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.737680 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-2srvx" event={"ID":"9fe3aab0-3f3b-4fb3-a5da-2206ba55e813","Type":"ContainerStarted","Data":"c360cfed37df4edcee05918e4a50e5e227984b3a433eee922ea16855fe520a16"} Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.738055 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-2srvx" Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.743510 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-snccq" event={"ID":"f254469c-2cb3-4f38-8c52-960aa17d27fe","Type":"ContainerStarted","Data":"54a67badf6f638753bb37f3cfd2b7f9b229f3a41a0367f156ddbf7c71f3feaf7"} Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.743789 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-snccq" Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.745133 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-gjdqb" 
event={"ID":"07f212b7-6aea-4a43-95fa-4637b6dc1d87","Type":"ContainerStarted","Data":"9ff344ce837b6f208064d63f451f40dd5af2155837b6ad2b9884d6aeb260acce"} Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.745519 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-gjdqb" Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.746575 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-975zn" event={"ID":"f229b3d6-46dd-42ab-bb96-c207b02b35d0","Type":"ContainerStarted","Data":"8b3f4752dd1c6488283071f01d28d80596da620d9ac17404db8c69544dc21418"} Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.746988 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-975zn" Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.756680 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c4ptt" event={"ID":"90ad3ca4-2470-4ab2-9e22-17db53a7237d","Type":"ContainerStarted","Data":"ace57e61789440618353dc3378101c228fa83f95eaa0a4f6563e6cad7e3984f1"} Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.762000 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-pg7jw" event={"ID":"67e3c7dc-a78f-4039-b326-93795dd322ca","Type":"ContainerStarted","Data":"19973a11a94a9cbb4c2d13edf3d4ea77c3aae2e1ce285693dab89c38eb334a5e"} Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.762591 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-pg7jw" Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.764976 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/barbican-operator-controller-manager-6db6876945-p2g4k" event={"ID":"7dfedb71-1284-4e5c-826d-efb134b34cdb","Type":"ContainerStarted","Data":"3de87fcbaf2fcd95b254a5af6d84db81ac73e151013f155595e52082c9313347"} Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.765516 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-p2g4k" Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.766931 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-82d7x" event={"ID":"1089c9f7-0d91-4639-9890-c41acc881797","Type":"ContainerStarted","Data":"93e887f0a05dcbea7416e36ed488d7c37e4f3ec7f00421778e5bb20ea5eca8f2"} Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.767390 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-82d7x" Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.769694 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-5t42k" event={"ID":"f734a97b-b94d-4132-a426-15111b3fc207","Type":"ContainerStarted","Data":"2e615e2072818c984e08978c1a99fd06abf228e3af735de7e1eeac2469869939"} Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.770282 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-5t42k" Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.771943 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-7bmg5" event={"ID":"dd26043c-48bc-4202-8266-d2590b6530e3","Type":"ContainerStarted","Data":"725e4fbeca60c6dc305ee538405e6e5a0f556966309a33b6e3c54f69c330ff96"} Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.772435 5014 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-7bmg5" Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.778342 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54688575f-xpd29" event={"ID":"f5555801-1739-45d3-946f-3b731b87c593","Type":"ContainerStarted","Data":"c1827750ee4b46e56cb22d484146040cb752d67dd77f6338a187b9fd421e76bc"} Feb 28 04:50:44 crc kubenswrapper[5014]: I0228 04:50:44.778992 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-54688575f-xpd29" Feb 28 04:50:45 crc kubenswrapper[5014]: I0228 04:50:45.060491 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-67d996989d-gm8rn" podStartSLOduration=8.868446896 podStartE2EDuration="24.060476188s" podCreationTimestamp="2026-02-28 04:50:21 +0000 UTC" firstStartedPulling="2026-02-28 04:50:23.881335554 +0000 UTC m=+1012.551461464" lastFinishedPulling="2026-02-28 04:50:39.073364846 +0000 UTC m=+1027.743490756" observedRunningTime="2026-02-28 04:50:44.88939712 +0000 UTC m=+1033.559523030" watchObservedRunningTime="2026-02-28 04:50:45.060476188 +0000 UTC m=+1033.730602098" Feb 28 04:50:45 crc kubenswrapper[5014]: I0228 04:50:45.169272 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-s4j6f" podStartSLOduration=8.95902519 podStartE2EDuration="24.169253182s" podCreationTimestamp="2026-02-28 04:50:21 +0000 UTC" firstStartedPulling="2026-02-28 04:50:23.862848936 +0000 UTC m=+1012.532974846" lastFinishedPulling="2026-02-28 04:50:39.073076928 +0000 UTC m=+1027.743202838" observedRunningTime="2026-02-28 04:50:45.078556575 +0000 UTC m=+1033.748682485" watchObservedRunningTime="2026-02-28 04:50:45.169253182 +0000 UTC 
m=+1033.839379092" Feb 28 04:50:45 crc kubenswrapper[5014]: I0228 04:50:45.214255 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-rcb7d" podStartSLOduration=5.491661526 podStartE2EDuration="24.214233156s" podCreationTimestamp="2026-02-28 04:50:21 +0000 UTC" firstStartedPulling="2026-02-28 04:50:23.865619361 +0000 UTC m=+1012.535745271" lastFinishedPulling="2026-02-28 04:50:42.588190991 +0000 UTC m=+1031.258316901" observedRunningTime="2026-02-28 04:50:45.166143738 +0000 UTC m=+1033.836269648" watchObservedRunningTime="2026-02-28 04:50:45.214233156 +0000 UTC m=+1033.884359066" Feb 28 04:50:45 crc kubenswrapper[5014]: I0228 04:50:45.214962 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-pg7jw" podStartSLOduration=7.281726096 podStartE2EDuration="24.214955076s" podCreationTimestamp="2026-02-28 04:50:21 +0000 UTC" firstStartedPulling="2026-02-28 04:50:23.795665524 +0000 UTC m=+1012.465791444" lastFinishedPulling="2026-02-28 04:50:40.728894514 +0000 UTC m=+1029.399020424" observedRunningTime="2026-02-28 04:50:45.214798632 +0000 UTC m=+1033.884924542" watchObservedRunningTime="2026-02-28 04:50:45.214955076 +0000 UTC m=+1033.885080986" Feb 28 04:50:45 crc kubenswrapper[5014]: I0228 04:50:45.241348 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-82d7x" podStartSLOduration=3.9239445870000003 podStartE2EDuration="23.241331127s" podCreationTimestamp="2026-02-28 04:50:22 +0000 UTC" firstStartedPulling="2026-02-28 04:50:23.900738058 +0000 UTC m=+1012.570863968" lastFinishedPulling="2026-02-28 04:50:43.218124598 +0000 UTC m=+1031.888250508" observedRunningTime="2026-02-28 04:50:45.238152041 +0000 UTC m=+1033.908277951" watchObservedRunningTime="2026-02-28 04:50:45.241331127 +0000 UTC 
m=+1033.911457037" Feb 28 04:50:45 crc kubenswrapper[5014]: I0228 04:50:45.274337 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-54688575f-xpd29" podStartSLOduration=10.457169322 podStartE2EDuration="24.274316607s" podCreationTimestamp="2026-02-28 04:50:21 +0000 UTC" firstStartedPulling="2026-02-28 04:50:23.375648341 +0000 UTC m=+1012.045774241" lastFinishedPulling="2026-02-28 04:50:37.192795606 +0000 UTC m=+1025.862921526" observedRunningTime="2026-02-28 04:50:45.27108658 +0000 UTC m=+1033.941212490" watchObservedRunningTime="2026-02-28 04:50:45.274316607 +0000 UTC m=+1033.944442517" Feb 28 04:50:45 crc kubenswrapper[5014]: I0228 04:50:45.339100 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-ppf6c" podStartSLOduration=9.826393062 podStartE2EDuration="24.339080394s" podCreationTimestamp="2026-02-28 04:50:21 +0000 UTC" firstStartedPulling="2026-02-28 04:50:23.876900045 +0000 UTC m=+1012.547025955" lastFinishedPulling="2026-02-28 04:50:38.389587377 +0000 UTC m=+1027.059713287" observedRunningTime="2026-02-28 04:50:45.306419613 +0000 UTC m=+1033.976545523" watchObservedRunningTime="2026-02-28 04:50:45.339080394 +0000 UTC m=+1034.009206304" Feb 28 04:50:45 crc kubenswrapper[5014]: I0228 04:50:45.340862 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-gjdqb" podStartSLOduration=7.489686226 podStartE2EDuration="24.340851323s" podCreationTimestamp="2026-02-28 04:50:21 +0000 UTC" firstStartedPulling="2026-02-28 04:50:23.877025999 +0000 UTC m=+1012.547151909" lastFinishedPulling="2026-02-28 04:50:40.728191096 +0000 UTC m=+1029.398317006" observedRunningTime="2026-02-28 04:50:45.339642549 +0000 UTC m=+1034.009768469" watchObservedRunningTime="2026-02-28 04:50:45.340851323 +0000 UTC 
m=+1034.010977233" Feb 28 04:50:45 crc kubenswrapper[5014]: I0228 04:50:45.421753 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-587tn" podStartSLOduration=7.599857218 podStartE2EDuration="24.421733104s" podCreationTimestamp="2026-02-28 04:50:21 +0000 UTC" firstStartedPulling="2026-02-28 04:50:23.885125707 +0000 UTC m=+1012.555251617" lastFinishedPulling="2026-02-28 04:50:40.707001593 +0000 UTC m=+1029.377127503" observedRunningTime="2026-02-28 04:50:45.382568928 +0000 UTC m=+1034.052694838" watchObservedRunningTime="2026-02-28 04:50:45.421733104 +0000 UTC m=+1034.091859014" Feb 28 04:50:45 crc kubenswrapper[5014]: I0228 04:50:45.423043 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-7bmg5" podStartSLOduration=10.445256711 podStartE2EDuration="24.42303768s" podCreationTimestamp="2026-02-28 04:50:21 +0000 UTC" firstStartedPulling="2026-02-28 04:50:23.364374387 +0000 UTC m=+1012.034500297" lastFinishedPulling="2026-02-28 04:50:37.342155336 +0000 UTC m=+1026.012281266" observedRunningTime="2026-02-28 04:50:45.419101544 +0000 UTC m=+1034.089227454" watchObservedRunningTime="2026-02-28 04:50:45.42303768 +0000 UTC m=+1034.093163590" Feb 28 04:50:45 crc kubenswrapper[5014]: I0228 04:50:45.452549 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-pl8nn" podStartSLOduration=7.587292508 podStartE2EDuration="24.452531895s" podCreationTimestamp="2026-02-28 04:50:21 +0000 UTC" firstStartedPulling="2026-02-28 04:50:23.86337912 +0000 UTC m=+1012.533505030" lastFinishedPulling="2026-02-28 04:50:40.728618507 +0000 UTC m=+1029.398744417" observedRunningTime="2026-02-28 04:50:45.447735737 +0000 UTC m=+1034.117861647" watchObservedRunningTime="2026-02-28 04:50:45.452531895 +0000 UTC m=+1034.122657795" 
Feb 28 04:50:45 crc kubenswrapper[5014]: I0228 04:50:45.482163 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-n9r5r" podStartSLOduration=12.102448314 podStartE2EDuration="24.482147855s" podCreationTimestamp="2026-02-28 04:50:21 +0000 UTC" firstStartedPulling="2026-02-28 04:50:23.055531954 +0000 UTC m=+1011.725657864" lastFinishedPulling="2026-02-28 04:50:35.435231495 +0000 UTC m=+1024.105357405" observedRunningTime="2026-02-28 04:50:45.478721663 +0000 UTC m=+1034.148847573" watchObservedRunningTime="2026-02-28 04:50:45.482147855 +0000 UTC m=+1034.152273765" Feb 28 04:50:45 crc kubenswrapper[5014]: I0228 04:50:45.505918 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-cfb47" podStartSLOduration=5.293801627 podStartE2EDuration="24.505902496s" podCreationTimestamp="2026-02-28 04:50:21 +0000 UTC" firstStartedPulling="2026-02-28 04:50:23.377923412 +0000 UTC m=+1012.048049312" lastFinishedPulling="2026-02-28 04:50:42.590024271 +0000 UTC m=+1031.260150181" observedRunningTime="2026-02-28 04:50:45.500831869 +0000 UTC m=+1034.170957779" watchObservedRunningTime="2026-02-28 04:50:45.505902496 +0000 UTC m=+1034.176028406" Feb 28 04:50:45 crc kubenswrapper[5014]: I0228 04:50:45.540020 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-clg6t" podStartSLOduration=4.255487475 podStartE2EDuration="23.540004597s" podCreationTimestamp="2026-02-28 04:50:22 +0000 UTC" firstStartedPulling="2026-02-28 04:50:23.930630115 +0000 UTC m=+1012.600756025" lastFinishedPulling="2026-02-28 04:50:43.215147237 +0000 UTC m=+1031.885273147" observedRunningTime="2026-02-28 04:50:45.535218897 +0000 UTC m=+1034.205344807" watchObservedRunningTime="2026-02-28 04:50:45.540004597 +0000 UTC m=+1034.210130507" Feb 28 04:50:45 
crc kubenswrapper[5014]: I0228 04:50:45.555388 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-snccq" podStartSLOduration=5.24828341 podStartE2EDuration="24.555370261s" podCreationTimestamp="2026-02-28 04:50:21 +0000 UTC" firstStartedPulling="2026-02-28 04:50:23.903738469 +0000 UTC m=+1012.573864379" lastFinishedPulling="2026-02-28 04:50:43.21082532 +0000 UTC m=+1031.880951230" observedRunningTime="2026-02-28 04:50:45.555218617 +0000 UTC m=+1034.225344527" watchObservedRunningTime="2026-02-28 04:50:45.555370261 +0000 UTC m=+1034.225496171" Feb 28 04:50:45 crc kubenswrapper[5014]: I0228 04:50:45.578616 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-5t42k" podStartSLOduration=4.552842424 podStartE2EDuration="24.578593987s" podCreationTimestamp="2026-02-28 04:50:21 +0000 UTC" firstStartedPulling="2026-02-28 04:50:23.930900042 +0000 UTC m=+1012.601025952" lastFinishedPulling="2026-02-28 04:50:43.956651605 +0000 UTC m=+1032.626777515" observedRunningTime="2026-02-28 04:50:45.577269882 +0000 UTC m=+1034.247395792" watchObservedRunningTime="2026-02-28 04:50:45.578593987 +0000 UTC m=+1034.248719897" Feb 28 04:50:45 crc kubenswrapper[5014]: I0228 04:50:45.602669 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-2srvx" podStartSLOduration=12.090154822 podStartE2EDuration="24.602644276s" podCreationTimestamp="2026-02-28 04:50:21 +0000 UTC" firstStartedPulling="2026-02-28 04:50:23.859990089 +0000 UTC m=+1012.530115999" lastFinishedPulling="2026-02-28 04:50:36.372479533 +0000 UTC m=+1025.042605453" observedRunningTime="2026-02-28 04:50:45.596335766 +0000 UTC m=+1034.266461676" watchObservedRunningTime="2026-02-28 04:50:45.602644276 +0000 UTC m=+1034.272770186" Feb 28 04:50:45 crc kubenswrapper[5014]: 
I0228 04:50:45.627360 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-975zn" podStartSLOduration=4.941376711 podStartE2EDuration="23.627346183s" podCreationTimestamp="2026-02-28 04:50:22 +0000 UTC" firstStartedPulling="2026-02-28 04:50:23.930690417 +0000 UTC m=+1012.600816327" lastFinishedPulling="2026-02-28 04:50:42.616659889 +0000 UTC m=+1031.286785799" observedRunningTime="2026-02-28 04:50:45.624738663 +0000 UTC m=+1034.294864573" watchObservedRunningTime="2026-02-28 04:50:45.627346183 +0000 UTC m=+1034.297472093" Feb 28 04:50:45 crc kubenswrapper[5014]: I0228 04:50:45.651160 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-p2g4k" podStartSLOduration=12.847998499 podStartE2EDuration="24.651134384s" podCreationTimestamp="2026-02-28 04:50:21 +0000 UTC" firstStartedPulling="2026-02-28 04:50:22.9145641 +0000 UTC m=+1011.584690010" lastFinishedPulling="2026-02-28 04:50:34.717699985 +0000 UTC m=+1023.387825895" observedRunningTime="2026-02-28 04:50:45.643056936 +0000 UTC m=+1034.313182856" watchObservedRunningTime="2026-02-28 04:50:45.651134384 +0000 UTC m=+1034.321260304" Feb 28 04:50:45 crc kubenswrapper[5014]: I0228 04:50:45.665469 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-c4ptt" podStartSLOduration=3.617307865 podStartE2EDuration="23.665448271s" podCreationTimestamp="2026-02-28 04:50:22 +0000 UTC" firstStartedPulling="2026-02-28 04:50:23.953313007 +0000 UTC m=+1012.623438917" lastFinishedPulling="2026-02-28 04:50:44.001453373 +0000 UTC m=+1032.671579323" observedRunningTime="2026-02-28 04:50:45.657862407 +0000 UTC m=+1034.327988317" watchObservedRunningTime="2026-02-28 04:50:45.665448271 +0000 UTC m=+1034.335574191" Feb 28 04:50:45 crc kubenswrapper[5014]: I0228 
04:50:45.706468 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:50:45 crc kubenswrapper[5014]: I0228 04:50:45.706528 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:50:45 crc kubenswrapper[5014]: I0228 04:50:45.706571 5014 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cct62" Feb 28 04:50:45 crc kubenswrapper[5014]: I0228 04:50:45.710208 5014 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cf1c2df486dbe48ee5a602ed54854b395ec2709d14f3810f6a23ce669b21c259"} pod="openshift-machine-config-operator/machine-config-daemon-cct62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 04:50:45 crc kubenswrapper[5014]: I0228 04:50:45.710273 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" containerID="cri-o://cf1c2df486dbe48ee5a602ed54854b395ec2709d14f3810f6a23ce669b21c259" gracePeriod=600 Feb 28 04:50:46 crc kubenswrapper[5014]: I0228 04:50:46.791263 5014 generic.go:334] "Generic (PLEG): container finished" podID="6aad0009-d904-48f8-8e30-82205907ece1" containerID="cf1c2df486dbe48ee5a602ed54854b395ec2709d14f3810f6a23ce669b21c259" exitCode=0 Feb 28 
04:50:46 crc kubenswrapper[5014]: I0228 04:50:46.792166 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" event={"ID":"6aad0009-d904-48f8-8e30-82205907ece1","Type":"ContainerDied","Data":"cf1c2df486dbe48ee5a602ed54854b395ec2709d14f3810f6a23ce669b21c259"} Feb 28 04:50:46 crc kubenswrapper[5014]: I0228 04:50:46.792189 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" event={"ID":"6aad0009-d904-48f8-8e30-82205907ece1","Type":"ContainerStarted","Data":"9fe0724568f1359a83a127eb5109a6ee8f87dacb3ce893d1b36328a0a6724e45"} Feb 28 04:50:46 crc kubenswrapper[5014]: I0228 04:50:46.792204 5014 scope.go:117] "RemoveContainer" containerID="3c623acb0fdab16e3036395527958cd8d0812619f2c3f18a285c60873b1031aa" Feb 28 04:50:51 crc kubenswrapper[5014]: I0228 04:50:51.958977 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-p2g4k" Feb 28 04:50:51 crc kubenswrapper[5014]: I0228 04:50:51.973294 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-n9r5r" Feb 28 04:50:52 crc kubenswrapper[5014]: I0228 04:50:52.003110 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-587tn" Feb 28 04:50:52 crc kubenswrapper[5014]: I0228 04:50:52.058267 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-5t42k" Feb 28 04:50:52 crc kubenswrapper[5014]: I0228 04:50:52.094445 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-2srvx" Feb 28 04:50:52 crc kubenswrapper[5014]: I0228 04:50:52.118933 5014 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-ppf6c" Feb 28 04:50:52 crc kubenswrapper[5014]: I0228 04:50:52.119725 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7c789f89c6-cfb47" Feb 28 04:50:52 crc kubenswrapper[5014]: I0228 04:50:52.155275 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-7bmg5" Feb 28 04:50:52 crc kubenswrapper[5014]: I0228 04:50:52.169447 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-67d996989d-gm8rn" Feb 28 04:50:52 crc kubenswrapper[5014]: I0228 04:50:52.229957 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-7b6bfb6475-s4j6f" Feb 28 04:50:52 crc kubenswrapper[5014]: I0228 04:50:52.280043 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-54688575f-xpd29" Feb 28 04:50:52 crc kubenswrapper[5014]: I0228 04:50:52.460447 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-rcb7d" Feb 28 04:50:52 crc kubenswrapper[5014]: I0228 04:50:52.515828 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-pg7jw" Feb 28 04:50:52 crc kubenswrapper[5014]: I0228 04:50:52.559898 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-pl8nn" Feb 28 04:50:52 crc kubenswrapper[5014]: I0228 04:50:52.574734 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-gjdqb" Feb 28 04:50:52 crc kubenswrapper[5014]: I0228 04:50:52.678551 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-snccq" Feb 28 04:50:52 crc kubenswrapper[5014]: I0228 04:50:52.801695 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5fdb694969-82d7x" Feb 28 04:50:52 crc kubenswrapper[5014]: I0228 04:50:52.855108 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-clg6t" Feb 28 04:50:53 crc kubenswrapper[5014]: I0228 04:50:53.089150 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-975zn" Feb 28 04:50:53 crc kubenswrapper[5014]: I0228 04:50:53.673391 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0535be64-bda6-4b55-9eb1-fe5a86d3cae8-cert\") pod \"infra-operator-controller-manager-786bd545f6-8hp88\" (UID: \"0535be64-bda6-4b55-9eb1-fe5a86d3cae8\") " pod="openstack-operators/infra-operator-controller-manager-786bd545f6-8hp88" Feb 28 04:50:53 crc kubenswrapper[5014]: I0228 04:50:53.680043 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/0535be64-bda6-4b55-9eb1-fe5a86d3cae8-cert\") pod \"infra-operator-controller-manager-786bd545f6-8hp88\" (UID: \"0535be64-bda6-4b55-9eb1-fe5a86d3cae8\") " pod="openstack-operators/infra-operator-controller-manager-786bd545f6-8hp88" Feb 28 04:50:53 crc kubenswrapper[5014]: I0228 04:50:53.920968 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-z4rh5" Feb 28 
04:50:53 crc kubenswrapper[5014]: I0228 04:50:53.929021 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-786bd545f6-8hp88" Feb 28 04:50:54 crc kubenswrapper[5014]: I0228 04:50:54.186673 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c84fa60-3777-4544-84ce-abc199e9df18-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj\" (UID: \"7c84fa60-3777-4544-84ce-abc199e9df18\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj" Feb 28 04:50:54 crc kubenswrapper[5014]: I0228 04:50:54.204861 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7c84fa60-3777-4544-84ce-abc199e9df18-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj\" (UID: \"7c84fa60-3777-4544-84ce-abc199e9df18\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj" Feb 28 04:50:54 crc kubenswrapper[5014]: I0228 04:50:54.285729 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-lrpsj" Feb 28 04:50:54 crc kubenswrapper[5014]: I0228 04:50:54.296487 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj" Feb 28 04:50:54 crc kubenswrapper[5014]: I0228 04:50:54.417065 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-786bd545f6-8hp88"] Feb 28 04:50:54 crc kubenswrapper[5014]: I0228 04:50:54.508864 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-metrics-certs\") pod \"openstack-operator-controller-manager-76974fc5d7-9d7k5\" (UID: \"b65e9823-17a7-42da-9191-af1db70355b9\") " pod="openstack-operators/openstack-operator-controller-manager-76974fc5d7-9d7k5" Feb 28 04:50:54 crc kubenswrapper[5014]: I0228 04:50:54.508949 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-webhook-certs\") pod \"openstack-operator-controller-manager-76974fc5d7-9d7k5\" (UID: \"b65e9823-17a7-42da-9191-af1db70355b9\") " pod="openstack-operators/openstack-operator-controller-manager-76974fc5d7-9d7k5" Feb 28 04:50:54 crc kubenswrapper[5014]: I0228 04:50:54.512107 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-webhook-certs\") pod \"openstack-operator-controller-manager-76974fc5d7-9d7k5\" (UID: \"b65e9823-17a7-42da-9191-af1db70355b9\") " pod="openstack-operators/openstack-operator-controller-manager-76974fc5d7-9d7k5" Feb 28 04:50:54 crc kubenswrapper[5014]: I0228 04:50:54.524047 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b65e9823-17a7-42da-9191-af1db70355b9-metrics-certs\") pod \"openstack-operator-controller-manager-76974fc5d7-9d7k5\" (UID: \"b65e9823-17a7-42da-9191-af1db70355b9\") " 
pod="openstack-operators/openstack-operator-controller-manager-76974fc5d7-9d7k5" Feb 28 04:50:54 crc kubenswrapper[5014]: I0228 04:50:54.646868 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj"] Feb 28 04:50:54 crc kubenswrapper[5014]: W0228 04:50:54.652838 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c84fa60_3777_4544_84ce_abc199e9df18.slice/crio-fa12133d99a2f5a0af77cd54c29dd88fe1168d99cade2af6f60951b67bc98614 WatchSource:0}: Error finding container fa12133d99a2f5a0af77cd54c29dd88fe1168d99cade2af6f60951b67bc98614: Status 404 returned error can't find the container with id fa12133d99a2f5a0af77cd54c29dd88fe1168d99cade2af6f60951b67bc98614 Feb 28 04:50:54 crc kubenswrapper[5014]: I0228 04:50:54.666258 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-zfp8r" Feb 28 04:50:54 crc kubenswrapper[5014]: I0228 04:50:54.674418 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-76974fc5d7-9d7k5" Feb 28 04:50:54 crc kubenswrapper[5014]: I0228 04:50:54.870527 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-786bd545f6-8hp88" event={"ID":"0535be64-bda6-4b55-9eb1-fe5a86d3cae8","Type":"ContainerStarted","Data":"7e145879d37c269b1ae4b557b99ab24a73b42c878aba1392a451bbf30619c004"} Feb 28 04:50:54 crc kubenswrapper[5014]: I0228 04:50:54.873202 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj" event={"ID":"7c84fa60-3777-4544-84ce-abc199e9df18","Type":"ContainerStarted","Data":"fa12133d99a2f5a0af77cd54c29dd88fe1168d99cade2af6f60951b67bc98614"} Feb 28 04:50:54 crc kubenswrapper[5014]: I0228 04:50:54.927054 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-76974fc5d7-9d7k5"] Feb 28 04:50:54 crc kubenswrapper[5014]: W0228 04:50:54.938082 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb65e9823_17a7_42da_9191_af1db70355b9.slice/crio-3a526aaf928410d8e05a14311c1d1b546cb15342e29f70aa0a7c4b92d8f36aae WatchSource:0}: Error finding container 3a526aaf928410d8e05a14311c1d1b546cb15342e29f70aa0a7c4b92d8f36aae: Status 404 returned error can't find the container with id 3a526aaf928410d8e05a14311c1d1b546cb15342e29f70aa0a7c4b92d8f36aae Feb 28 04:50:55 crc kubenswrapper[5014]: I0228 04:50:55.903139 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-76974fc5d7-9d7k5" event={"ID":"b65e9823-17a7-42da-9191-af1db70355b9","Type":"ContainerStarted","Data":"7d60a296db5b6fc113e84c2b79f8bcdd08ea1b7bab03003236271bfa34dbdb5b"} Feb 28 04:50:55 crc kubenswrapper[5014]: I0228 04:50:55.903729 5014 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-76974fc5d7-9d7k5" event={"ID":"b65e9823-17a7-42da-9191-af1db70355b9","Type":"ContainerStarted","Data":"3a526aaf928410d8e05a14311c1d1b546cb15342e29f70aa0a7c4b92d8f36aae"} Feb 28 04:50:55 crc kubenswrapper[5014]: I0228 04:50:55.908657 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-76974fc5d7-9d7k5" Feb 28 04:50:55 crc kubenswrapper[5014]: I0228 04:50:55.944596 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-76974fc5d7-9d7k5" podStartSLOduration=33.944577717 podStartE2EDuration="33.944577717s" podCreationTimestamp="2026-02-28 04:50:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:50:55.937153627 +0000 UTC m=+1044.607279547" watchObservedRunningTime="2026-02-28 04:50:55.944577717 +0000 UTC m=+1044.614703627" Feb 28 04:50:59 crc kubenswrapper[5014]: I0228 04:50:59.936362 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-786bd545f6-8hp88" event={"ID":"0535be64-bda6-4b55-9eb1-fe5a86d3cae8","Type":"ContainerStarted","Data":"9fc227277a7605892bc7b07e6173d35422148b05c10bcc8ade0452a3ca5d653f"} Feb 28 04:50:59 crc kubenswrapper[5014]: I0228 04:50:59.937775 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-786bd545f6-8hp88" Feb 28 04:50:59 crc kubenswrapper[5014]: I0228 04:50:59.955178 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-786bd545f6-8hp88" podStartSLOduration=33.793790656 podStartE2EDuration="38.955160117s" podCreationTimestamp="2026-02-28 04:50:21 +0000 UTC" firstStartedPulling="2026-02-28 
04:50:54.438668216 +0000 UTC m=+1043.108794126" lastFinishedPulling="2026-02-28 04:50:59.600037677 +0000 UTC m=+1048.270163587" observedRunningTime="2026-02-28 04:50:59.950532673 +0000 UTC m=+1048.620658583" watchObservedRunningTime="2026-02-28 04:50:59.955160117 +0000 UTC m=+1048.625286027" Feb 28 04:51:00 crc kubenswrapper[5014]: I0228 04:51:00.944333 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj" event={"ID":"7c84fa60-3777-4544-84ce-abc199e9df18","Type":"ContainerStarted","Data":"067d53c5b28da3c774cec1eeff4ba70f35c1a36f177052d0a481821de2c49097"} Feb 28 04:51:00 crc kubenswrapper[5014]: I0228 04:51:00.944630 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj" Feb 28 04:51:00 crc kubenswrapper[5014]: I0228 04:51:00.986224 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj" podStartSLOduration=33.952886019 podStartE2EDuration="39.986205626s" podCreationTimestamp="2026-02-28 04:50:21 +0000 UTC" firstStartedPulling="2026-02-28 04:50:54.655003052 +0000 UTC m=+1043.325128962" lastFinishedPulling="2026-02-28 04:51:00.688322659 +0000 UTC m=+1049.358448569" observedRunningTime="2026-02-28 04:51:00.984332967 +0000 UTC m=+1049.654458877" watchObservedRunningTime="2026-02-28 04:51:00.986205626 +0000 UTC m=+1049.656331536" Feb 28 04:51:04 crc kubenswrapper[5014]: I0228 04:51:04.682554 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-76974fc5d7-9d7k5" Feb 28 04:51:13 crc kubenswrapper[5014]: I0228 04:51:13.939209 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-786bd545f6-8hp88" Feb 28 04:51:14 crc 
kubenswrapper[5014]: I0228 04:51:14.302506 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj" Feb 28 04:51:33 crc kubenswrapper[5014]: I0228 04:51:33.311039 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-cb8zt"] Feb 28 04:51:33 crc kubenswrapper[5014]: I0228 04:51:33.312987 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-cb8zt" Feb 28 04:51:33 crc kubenswrapper[5014]: I0228 04:51:33.315330 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 28 04:51:33 crc kubenswrapper[5014]: I0228 04:51:33.315351 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-qnqx8" Feb 28 04:51:33 crc kubenswrapper[5014]: I0228 04:51:33.315332 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 28 04:51:33 crc kubenswrapper[5014]: I0228 04:51:33.315389 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 28 04:51:33 crc kubenswrapper[5014]: I0228 04:51:33.328059 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-cb8zt"] Feb 28 04:51:33 crc kubenswrapper[5014]: I0228 04:51:33.393898 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-wfl7r"] Feb 28 04:51:33 crc kubenswrapper[5014]: I0228 04:51:33.395171 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-wfl7r" Feb 28 04:51:33 crc kubenswrapper[5014]: I0228 04:51:33.400082 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-wfl7r"] Feb 28 04:51:33 crc kubenswrapper[5014]: I0228 04:51:33.401193 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 28 04:51:33 crc kubenswrapper[5014]: I0228 04:51:33.469562 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flwk9\" (UniqueName: \"kubernetes.io/projected/9f7d64ae-72df-4502-91fc-2c9de87ee05f-kube-api-access-flwk9\") pod \"dnsmasq-dns-675f4bcbfc-cb8zt\" (UID: \"9f7d64ae-72df-4502-91fc-2c9de87ee05f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-cb8zt" Feb 28 04:51:33 crc kubenswrapper[5014]: I0228 04:51:33.469694 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c140535a-737a-4dab-9b80-501657ca3921-config\") pod \"dnsmasq-dns-78dd6ddcc-wfl7r\" (UID: \"c140535a-737a-4dab-9b80-501657ca3921\") " pod="openstack/dnsmasq-dns-78dd6ddcc-wfl7r" Feb 28 04:51:33 crc kubenswrapper[5014]: I0228 04:51:33.469746 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c140535a-737a-4dab-9b80-501657ca3921-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-wfl7r\" (UID: \"c140535a-737a-4dab-9b80-501657ca3921\") " pod="openstack/dnsmasq-dns-78dd6ddcc-wfl7r" Feb 28 04:51:33 crc kubenswrapper[5014]: I0228 04:51:33.469772 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh8ng\" (UniqueName: \"kubernetes.io/projected/c140535a-737a-4dab-9b80-501657ca3921-kube-api-access-mh8ng\") pod \"dnsmasq-dns-78dd6ddcc-wfl7r\" (UID: \"c140535a-737a-4dab-9b80-501657ca3921\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-wfl7r" Feb 28 04:51:33 crc kubenswrapper[5014]: I0228 04:51:33.469796 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f7d64ae-72df-4502-91fc-2c9de87ee05f-config\") pod \"dnsmasq-dns-675f4bcbfc-cb8zt\" (UID: \"9f7d64ae-72df-4502-91fc-2c9de87ee05f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-cb8zt" Feb 28 04:51:33 crc kubenswrapper[5014]: I0228 04:51:33.570558 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c140535a-737a-4dab-9b80-501657ca3921-config\") pod \"dnsmasq-dns-78dd6ddcc-wfl7r\" (UID: \"c140535a-737a-4dab-9b80-501657ca3921\") " pod="openstack/dnsmasq-dns-78dd6ddcc-wfl7r" Feb 28 04:51:33 crc kubenswrapper[5014]: I0228 04:51:33.570626 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c140535a-737a-4dab-9b80-501657ca3921-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-wfl7r\" (UID: \"c140535a-737a-4dab-9b80-501657ca3921\") " pod="openstack/dnsmasq-dns-78dd6ddcc-wfl7r" Feb 28 04:51:33 crc kubenswrapper[5014]: I0228 04:51:33.570655 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh8ng\" (UniqueName: \"kubernetes.io/projected/c140535a-737a-4dab-9b80-501657ca3921-kube-api-access-mh8ng\") pod \"dnsmasq-dns-78dd6ddcc-wfl7r\" (UID: \"c140535a-737a-4dab-9b80-501657ca3921\") " pod="openstack/dnsmasq-dns-78dd6ddcc-wfl7r" Feb 28 04:51:33 crc kubenswrapper[5014]: I0228 04:51:33.570671 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f7d64ae-72df-4502-91fc-2c9de87ee05f-config\") pod \"dnsmasq-dns-675f4bcbfc-cb8zt\" (UID: \"9f7d64ae-72df-4502-91fc-2c9de87ee05f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-cb8zt" Feb 28 04:51:33 crc kubenswrapper[5014]: 
I0228 04:51:33.570698 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flwk9\" (UniqueName: \"kubernetes.io/projected/9f7d64ae-72df-4502-91fc-2c9de87ee05f-kube-api-access-flwk9\") pod \"dnsmasq-dns-675f4bcbfc-cb8zt\" (UID: \"9f7d64ae-72df-4502-91fc-2c9de87ee05f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-cb8zt" Feb 28 04:51:33 crc kubenswrapper[5014]: I0228 04:51:33.571536 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c140535a-737a-4dab-9b80-501657ca3921-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-wfl7r\" (UID: \"c140535a-737a-4dab-9b80-501657ca3921\") " pod="openstack/dnsmasq-dns-78dd6ddcc-wfl7r" Feb 28 04:51:33 crc kubenswrapper[5014]: I0228 04:51:33.571675 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c140535a-737a-4dab-9b80-501657ca3921-config\") pod \"dnsmasq-dns-78dd6ddcc-wfl7r\" (UID: \"c140535a-737a-4dab-9b80-501657ca3921\") " pod="openstack/dnsmasq-dns-78dd6ddcc-wfl7r" Feb 28 04:51:33 crc kubenswrapper[5014]: I0228 04:51:33.571680 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f7d64ae-72df-4502-91fc-2c9de87ee05f-config\") pod \"dnsmasq-dns-675f4bcbfc-cb8zt\" (UID: \"9f7d64ae-72df-4502-91fc-2c9de87ee05f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-cb8zt" Feb 28 04:51:33 crc kubenswrapper[5014]: I0228 04:51:33.588099 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh8ng\" (UniqueName: \"kubernetes.io/projected/c140535a-737a-4dab-9b80-501657ca3921-kube-api-access-mh8ng\") pod \"dnsmasq-dns-78dd6ddcc-wfl7r\" (UID: \"c140535a-737a-4dab-9b80-501657ca3921\") " pod="openstack/dnsmasq-dns-78dd6ddcc-wfl7r" Feb 28 04:51:33 crc kubenswrapper[5014]: I0228 04:51:33.589214 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-flwk9\" (UniqueName: \"kubernetes.io/projected/9f7d64ae-72df-4502-91fc-2c9de87ee05f-kube-api-access-flwk9\") pod \"dnsmasq-dns-675f4bcbfc-cb8zt\" (UID: \"9f7d64ae-72df-4502-91fc-2c9de87ee05f\") " pod="openstack/dnsmasq-dns-675f4bcbfc-cb8zt" Feb 28 04:51:33 crc kubenswrapper[5014]: I0228 04:51:33.637749 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-cb8zt" Feb 28 04:51:33 crc kubenswrapper[5014]: I0228 04:51:33.718777 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-wfl7r" Feb 28 04:51:34 crc kubenswrapper[5014]: I0228 04:51:34.128182 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-cb8zt"] Feb 28 04:51:34 crc kubenswrapper[5014]: I0228 04:51:34.161451 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-wfl7r"] Feb 28 04:51:34 crc kubenswrapper[5014]: I0228 04:51:34.365126 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-cb8zt" event={"ID":"9f7d64ae-72df-4502-91fc-2c9de87ee05f","Type":"ContainerStarted","Data":"de20b4a5c18221aa29c131700cd8bbf2e4d2df4a0900df1573644bef202c3963"} Feb 28 04:51:34 crc kubenswrapper[5014]: I0228 04:51:34.367298 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-wfl7r" event={"ID":"c140535a-737a-4dab-9b80-501657ca3921","Type":"ContainerStarted","Data":"33c83437c62e28e07f8d15d000f0ed8a382b0aef0b8f5d80642fcfa7ddea9b58"} Feb 28 04:51:36 crc kubenswrapper[5014]: I0228 04:51:36.112098 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-cb8zt"] Feb 28 04:51:36 crc kubenswrapper[5014]: I0228 04:51:36.129448 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-gn8d5"] Feb 28 04:51:36 crc kubenswrapper[5014]: I0228 04:51:36.130586 5014 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-gn8d5" Feb 28 04:51:36 crc kubenswrapper[5014]: I0228 04:51:36.144830 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-gn8d5"] Feb 28 04:51:36 crc kubenswrapper[5014]: I0228 04:51:36.216593 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79gjl\" (UniqueName: \"kubernetes.io/projected/6f39bd6f-3b06-4fbb-886d-b96e77209f53-kube-api-access-79gjl\") pod \"dnsmasq-dns-5ccc8479f9-gn8d5\" (UID: \"6f39bd6f-3b06-4fbb-886d-b96e77209f53\") " pod="openstack/dnsmasq-dns-5ccc8479f9-gn8d5" Feb 28 04:51:36 crc kubenswrapper[5014]: I0228 04:51:36.216662 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f39bd6f-3b06-4fbb-886d-b96e77209f53-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-gn8d5\" (UID: \"6f39bd6f-3b06-4fbb-886d-b96e77209f53\") " pod="openstack/dnsmasq-dns-5ccc8479f9-gn8d5" Feb 28 04:51:36 crc kubenswrapper[5014]: I0228 04:51:36.216687 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f39bd6f-3b06-4fbb-886d-b96e77209f53-config\") pod \"dnsmasq-dns-5ccc8479f9-gn8d5\" (UID: \"6f39bd6f-3b06-4fbb-886d-b96e77209f53\") " pod="openstack/dnsmasq-dns-5ccc8479f9-gn8d5" Feb 28 04:51:36 crc kubenswrapper[5014]: I0228 04:51:36.318339 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79gjl\" (UniqueName: \"kubernetes.io/projected/6f39bd6f-3b06-4fbb-886d-b96e77209f53-kube-api-access-79gjl\") pod \"dnsmasq-dns-5ccc8479f9-gn8d5\" (UID: \"6f39bd6f-3b06-4fbb-886d-b96e77209f53\") " pod="openstack/dnsmasq-dns-5ccc8479f9-gn8d5" Feb 28 04:51:36 crc kubenswrapper[5014]: I0228 04:51:36.318410 5014 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f39bd6f-3b06-4fbb-886d-b96e77209f53-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-gn8d5\" (UID: \"6f39bd6f-3b06-4fbb-886d-b96e77209f53\") " pod="openstack/dnsmasq-dns-5ccc8479f9-gn8d5" Feb 28 04:51:36 crc kubenswrapper[5014]: I0228 04:51:36.318437 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f39bd6f-3b06-4fbb-886d-b96e77209f53-config\") pod \"dnsmasq-dns-5ccc8479f9-gn8d5\" (UID: \"6f39bd6f-3b06-4fbb-886d-b96e77209f53\") " pod="openstack/dnsmasq-dns-5ccc8479f9-gn8d5" Feb 28 04:51:36 crc kubenswrapper[5014]: I0228 04:51:36.319445 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f39bd6f-3b06-4fbb-886d-b96e77209f53-config\") pod \"dnsmasq-dns-5ccc8479f9-gn8d5\" (UID: \"6f39bd6f-3b06-4fbb-886d-b96e77209f53\") " pod="openstack/dnsmasq-dns-5ccc8479f9-gn8d5" Feb 28 04:51:36 crc kubenswrapper[5014]: I0228 04:51:36.319508 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f39bd6f-3b06-4fbb-886d-b96e77209f53-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-gn8d5\" (UID: \"6f39bd6f-3b06-4fbb-886d-b96e77209f53\") " pod="openstack/dnsmasq-dns-5ccc8479f9-gn8d5" Feb 28 04:51:36 crc kubenswrapper[5014]: I0228 04:51:36.340445 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79gjl\" (UniqueName: \"kubernetes.io/projected/6f39bd6f-3b06-4fbb-886d-b96e77209f53-kube-api-access-79gjl\") pod \"dnsmasq-dns-5ccc8479f9-gn8d5\" (UID: \"6f39bd6f-3b06-4fbb-886d-b96e77209f53\") " pod="openstack/dnsmasq-dns-5ccc8479f9-gn8d5" Feb 28 04:51:36 crc kubenswrapper[5014]: I0228 04:51:36.416417 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-wfl7r"] Feb 28 04:51:36 crc kubenswrapper[5014]: I0228 04:51:36.438589 5014 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-66zrm"] Feb 28 04:51:36 crc kubenswrapper[5014]: I0228 04:51:36.439692 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-66zrm" Feb 28 04:51:36 crc kubenswrapper[5014]: I0228 04:51:36.447453 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-66zrm"] Feb 28 04:51:36 crc kubenswrapper[5014]: I0228 04:51:36.465083 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-gn8d5" Feb 28 04:51:36 crc kubenswrapper[5014]: I0228 04:51:36.521772 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6128fe0e-47ba-405d-b527-38b43d9d262c-config\") pod \"dnsmasq-dns-57d769cc4f-66zrm\" (UID: \"6128fe0e-47ba-405d-b527-38b43d9d262c\") " pod="openstack/dnsmasq-dns-57d769cc4f-66zrm" Feb 28 04:51:36 crc kubenswrapper[5014]: I0228 04:51:36.522132 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6128fe0e-47ba-405d-b527-38b43d9d262c-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-66zrm\" (UID: \"6128fe0e-47ba-405d-b527-38b43d9d262c\") " pod="openstack/dnsmasq-dns-57d769cc4f-66zrm" Feb 28 04:51:36 crc kubenswrapper[5014]: I0228 04:51:36.522221 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-969tz\" (UniqueName: \"kubernetes.io/projected/6128fe0e-47ba-405d-b527-38b43d9d262c-kube-api-access-969tz\") pod \"dnsmasq-dns-57d769cc4f-66zrm\" (UID: \"6128fe0e-47ba-405d-b527-38b43d9d262c\") " pod="openstack/dnsmasq-dns-57d769cc4f-66zrm" Feb 28 04:51:36 crc kubenswrapper[5014]: I0228 04:51:36.623158 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/6128fe0e-47ba-405d-b527-38b43d9d262c-config\") pod \"dnsmasq-dns-57d769cc4f-66zrm\" (UID: \"6128fe0e-47ba-405d-b527-38b43d9d262c\") " pod="openstack/dnsmasq-dns-57d769cc4f-66zrm" Feb 28 04:51:36 crc kubenswrapper[5014]: I0228 04:51:36.623214 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6128fe0e-47ba-405d-b527-38b43d9d262c-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-66zrm\" (UID: \"6128fe0e-47ba-405d-b527-38b43d9d262c\") " pod="openstack/dnsmasq-dns-57d769cc4f-66zrm" Feb 28 04:51:36 crc kubenswrapper[5014]: I0228 04:51:36.623318 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-969tz\" (UniqueName: \"kubernetes.io/projected/6128fe0e-47ba-405d-b527-38b43d9d262c-kube-api-access-969tz\") pod \"dnsmasq-dns-57d769cc4f-66zrm\" (UID: \"6128fe0e-47ba-405d-b527-38b43d9d262c\") " pod="openstack/dnsmasq-dns-57d769cc4f-66zrm" Feb 28 04:51:36 crc kubenswrapper[5014]: I0228 04:51:36.624612 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6128fe0e-47ba-405d-b527-38b43d9d262c-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-66zrm\" (UID: \"6128fe0e-47ba-405d-b527-38b43d9d262c\") " pod="openstack/dnsmasq-dns-57d769cc4f-66zrm" Feb 28 04:51:36 crc kubenswrapper[5014]: I0228 04:51:36.624838 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6128fe0e-47ba-405d-b527-38b43d9d262c-config\") pod \"dnsmasq-dns-57d769cc4f-66zrm\" (UID: \"6128fe0e-47ba-405d-b527-38b43d9d262c\") " pod="openstack/dnsmasq-dns-57d769cc4f-66zrm" Feb 28 04:51:36 crc kubenswrapper[5014]: I0228 04:51:36.643120 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-969tz\" (UniqueName: \"kubernetes.io/projected/6128fe0e-47ba-405d-b527-38b43d9d262c-kube-api-access-969tz\") pod 
\"dnsmasq-dns-57d769cc4f-66zrm\" (UID: \"6128fe0e-47ba-405d-b527-38b43d9d262c\") " pod="openstack/dnsmasq-dns-57d769cc4f-66zrm" Feb 28 04:51:36 crc kubenswrapper[5014]: I0228 04:51:36.759560 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-66zrm" Feb 28 04:51:36 crc kubenswrapper[5014]: I0228 04:51:36.967819 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-gn8d5"] Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.005096 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-66zrm"] Feb 28 04:51:37 crc kubenswrapper[5014]: W0228 04:51:37.009936 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6128fe0e_47ba_405d_b527_38b43d9d262c.slice/crio-d044c9d9c7f8b6e2edec0b6318ba8a150ee53a9c53e68088a6a0387f07eeffc3 WatchSource:0}: Error finding container d044c9d9c7f8b6e2edec0b6318ba8a150ee53a9c53e68088a6a0387f07eeffc3: Status 404 returned error can't find the container with id d044c9d9c7f8b6e2edec0b6318ba8a150ee53a9c53e68088a6a0387f07eeffc3 Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.270168 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.271294 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.273852 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.274073 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.274073 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.274111 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.274156 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.274220 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-679gc" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.274560 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.282138 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.408302 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-66zrm" event={"ID":"6128fe0e-47ba-405d-b527-38b43d9d262c","Type":"ContainerStarted","Data":"d044c9d9c7f8b6e2edec0b6318ba8a150ee53a9c53e68088a6a0387f07eeffc3"} Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.409641 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-gn8d5" 
event={"ID":"6f39bd6f-3b06-4fbb-886d-b96e77209f53","Type":"ContainerStarted","Data":"ce5befd8915dd191ead47241abc9ec6768015873fb0f92d256929f1d71a00e22"} Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.432935 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.432982 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.433010 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.433566 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.433603 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.433703 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.433757 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g85ss\" (UniqueName: \"kubernetes.io/projected/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-kube-api-access-g85ss\") pod \"rabbitmq-cell1-server-0\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.433784 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.433857 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.433942 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.434009 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.535893 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.535942 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.535981 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.536006 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-rabbitmq-plugins\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.536046 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.536067 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.536087 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.536113 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.536132 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.536149 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.536164 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g85ss\" (UniqueName: \"kubernetes.io/projected/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-kube-api-access-g85ss\") pod \"rabbitmq-cell1-server-0\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.537742 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.539399 5014 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.539561 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.539826 5014 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.540908 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.545181 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.545442 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.559504 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.559698 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.560421 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.562123 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.564544 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.564778 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.564963 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-tcg9x" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.565514 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.565726 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.565937 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.567726 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g85ss\" (UniqueName: \"kubernetes.io/projected/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-kube-api-access-g85ss\") pod \"rabbitmq-cell1-server-0\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.567773 5014 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.568037 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.572020 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.595068 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.630755 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.739322 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/351fb773-0669-41c0-aee8-0469f34d64c9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") " pod="openstack/rabbitmq-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.739375 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/351fb773-0669-41c0-aee8-0469f34d64c9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") " pod="openstack/rabbitmq-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.739410 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/351fb773-0669-41c0-aee8-0469f34d64c9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") " pod="openstack/rabbitmq-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.739432 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwbcj\" (UniqueName: \"kubernetes.io/projected/351fb773-0669-41c0-aee8-0469f34d64c9-kube-api-access-fwbcj\") pod \"rabbitmq-server-0\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") " pod="openstack/rabbitmq-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.739457 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/351fb773-0669-41c0-aee8-0469f34d64c9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") " pod="openstack/rabbitmq-server-0" 
Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.739498 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/351fb773-0669-41c0-aee8-0469f34d64c9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") " pod="openstack/rabbitmq-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.739516 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/351fb773-0669-41c0-aee8-0469f34d64c9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") " pod="openstack/rabbitmq-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.739542 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/351fb773-0669-41c0-aee8-0469f34d64c9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") " pod="openstack/rabbitmq-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.739592 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") " pod="openstack/rabbitmq-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.739617 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/351fb773-0669-41c0-aee8-0469f34d64c9-config-data\") pod \"rabbitmq-server-0\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") " pod="openstack/rabbitmq-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.739651 5014 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/351fb773-0669-41c0-aee8-0469f34d64c9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") " pod="openstack/rabbitmq-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.841139 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/351fb773-0669-41c0-aee8-0469f34d64c9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") " pod="openstack/rabbitmq-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.841208 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/351fb773-0669-41c0-aee8-0469f34d64c9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") " pod="openstack/rabbitmq-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.841231 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/351fb773-0669-41c0-aee8-0469f34d64c9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") " pod="openstack/rabbitmq-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.841263 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/351fb773-0669-41c0-aee8-0469f34d64c9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") " pod="openstack/rabbitmq-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.841307 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") 
pod \"rabbitmq-server-0\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") " pod="openstack/rabbitmq-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.841330 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/351fb773-0669-41c0-aee8-0469f34d64c9-config-data\") pod \"rabbitmq-server-0\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") " pod="openstack/rabbitmq-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.841366 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/351fb773-0669-41c0-aee8-0469f34d64c9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") " pod="openstack/rabbitmq-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.841407 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/351fb773-0669-41c0-aee8-0469f34d64c9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") " pod="openstack/rabbitmq-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.841430 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/351fb773-0669-41c0-aee8-0469f34d64c9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") " pod="openstack/rabbitmq-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.841450 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/351fb773-0669-41c0-aee8-0469f34d64c9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") " pod="openstack/rabbitmq-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 
04:51:37.841466 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwbcj\" (UniqueName: \"kubernetes.io/projected/351fb773-0669-41c0-aee8-0469f34d64c9-kube-api-access-fwbcj\") pod \"rabbitmq-server-0\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") " pod="openstack/rabbitmq-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.841900 5014 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.842777 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/351fb773-0669-41c0-aee8-0469f34d64c9-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") " pod="openstack/rabbitmq-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.843471 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/351fb773-0669-41c0-aee8-0469f34d64c9-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") " pod="openstack/rabbitmq-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.844063 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/351fb773-0669-41c0-aee8-0469f34d64c9-config-data\") pod \"rabbitmq-server-0\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") " pod="openstack/rabbitmq-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.844527 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/351fb773-0669-41c0-aee8-0469f34d64c9-server-conf\") pod \"rabbitmq-server-0\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") " pod="openstack/rabbitmq-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.844622 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/351fb773-0669-41c0-aee8-0469f34d64c9-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") " pod="openstack/rabbitmq-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.846236 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/351fb773-0669-41c0-aee8-0469f34d64c9-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") " pod="openstack/rabbitmq-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.847974 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/351fb773-0669-41c0-aee8-0469f34d64c9-pod-info\") pod \"rabbitmq-server-0\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") " pod="openstack/rabbitmq-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.848634 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/351fb773-0669-41c0-aee8-0469f34d64c9-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") " pod="openstack/rabbitmq-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.849019 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/351fb773-0669-41c0-aee8-0469f34d64c9-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") " pod="openstack/rabbitmq-server-0" Feb 28 04:51:37 crc 
kubenswrapper[5014]: I0228 04:51:37.863160 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwbcj\" (UniqueName: \"kubernetes.io/projected/351fb773-0669-41c0-aee8-0469f34d64c9-kube-api-access-fwbcj\") pod \"rabbitmq-server-0\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") " pod="openstack/rabbitmq-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.866264 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") " pod="openstack/rabbitmq-server-0" Feb 28 04:51:37 crc kubenswrapper[5014]: I0228 04:51:37.947299 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 28 04:51:38 crc kubenswrapper[5014]: I0228 04:51:38.665341 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 28 04:51:38 crc kubenswrapper[5014]: I0228 04:51:38.666436 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Feb 28 04:51:38 crc kubenswrapper[5014]: I0228 04:51:38.670111 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 28 04:51:38 crc kubenswrapper[5014]: I0228 04:51:38.670349 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 28 04:51:38 crc kubenswrapper[5014]: I0228 04:51:38.672163 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-g6mjb" Feb 28 04:51:38 crc kubenswrapper[5014]: I0228 04:51:38.672382 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 28 04:51:38 crc kubenswrapper[5014]: I0228 04:51:38.674157 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 28 04:51:38 crc kubenswrapper[5014]: I0228 04:51:38.680780 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 28 04:51:38 crc kubenswrapper[5014]: I0228 04:51:38.755881 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c1c70607-6183-4835-9ce6-fe3ef0d2b6fb-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c1c70607-6183-4835-9ce6-fe3ef0d2b6fb\") " pod="openstack/openstack-galera-0" Feb 28 04:51:38 crc kubenswrapper[5014]: I0228 04:51:38.755938 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c1c70607-6183-4835-9ce6-fe3ef0d2b6fb-config-data-default\") pod \"openstack-galera-0\" (UID: \"c1c70607-6183-4835-9ce6-fe3ef0d2b6fb\") " pod="openstack/openstack-galera-0" Feb 28 04:51:38 crc kubenswrapper[5014]: I0228 04:51:38.756003 5014 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1c70607-6183-4835-9ce6-fe3ef0d2b6fb-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c1c70607-6183-4835-9ce6-fe3ef0d2b6fb\") " pod="openstack/openstack-galera-0" Feb 28 04:51:38 crc kubenswrapper[5014]: I0228 04:51:38.756206 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1c70607-6183-4835-9ce6-fe3ef0d2b6fb-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c1c70607-6183-4835-9ce6-fe3ef0d2b6fb\") " pod="openstack/openstack-galera-0" Feb 28 04:51:38 crc kubenswrapper[5014]: I0228 04:51:38.756409 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1c70607-6183-4835-9ce6-fe3ef0d2b6fb-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c1c70607-6183-4835-9ce6-fe3ef0d2b6fb\") " pod="openstack/openstack-galera-0" Feb 28 04:51:38 crc kubenswrapper[5014]: I0228 04:51:38.756496 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5xfz\" (UniqueName: \"kubernetes.io/projected/c1c70607-6183-4835-9ce6-fe3ef0d2b6fb-kube-api-access-f5xfz\") pod \"openstack-galera-0\" (UID: \"c1c70607-6183-4835-9ce6-fe3ef0d2b6fb\") " pod="openstack/openstack-galera-0" Feb 28 04:51:38 crc kubenswrapper[5014]: I0228 04:51:38.756571 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c1c70607-6183-4835-9ce6-fe3ef0d2b6fb-kolla-config\") pod \"openstack-galera-0\" (UID: \"c1c70607-6183-4835-9ce6-fe3ef0d2b6fb\") " pod="openstack/openstack-galera-0" Feb 28 04:51:38 crc kubenswrapper[5014]: I0228 04:51:38.756765 5014 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"c1c70607-6183-4835-9ce6-fe3ef0d2b6fb\") " pod="openstack/openstack-galera-0" Feb 28 04:51:38 crc kubenswrapper[5014]: I0228 04:51:38.858967 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1c70607-6183-4835-9ce6-fe3ef0d2b6fb-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c1c70607-6183-4835-9ce6-fe3ef0d2b6fb\") " pod="openstack/openstack-galera-0" Feb 28 04:51:38 crc kubenswrapper[5014]: I0228 04:51:38.860296 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1c70607-6183-4835-9ce6-fe3ef0d2b6fb-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c1c70607-6183-4835-9ce6-fe3ef0d2b6fb\") " pod="openstack/openstack-galera-0" Feb 28 04:51:38 crc kubenswrapper[5014]: I0228 04:51:38.860474 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5xfz\" (UniqueName: \"kubernetes.io/projected/c1c70607-6183-4835-9ce6-fe3ef0d2b6fb-kube-api-access-f5xfz\") pod \"openstack-galera-0\" (UID: \"c1c70607-6183-4835-9ce6-fe3ef0d2b6fb\") " pod="openstack/openstack-galera-0" Feb 28 04:51:38 crc kubenswrapper[5014]: I0228 04:51:38.860536 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c1c70607-6183-4835-9ce6-fe3ef0d2b6fb-kolla-config\") pod \"openstack-galera-0\" (UID: \"c1c70607-6183-4835-9ce6-fe3ef0d2b6fb\") " pod="openstack/openstack-galera-0" Feb 28 04:51:38 crc kubenswrapper[5014]: I0228 04:51:38.860637 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod 
\"openstack-galera-0\" (UID: \"c1c70607-6183-4835-9ce6-fe3ef0d2b6fb\") " pod="openstack/openstack-galera-0" Feb 28 04:51:38 crc kubenswrapper[5014]: I0228 04:51:38.860705 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c1c70607-6183-4835-9ce6-fe3ef0d2b6fb-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c1c70607-6183-4835-9ce6-fe3ef0d2b6fb\") " pod="openstack/openstack-galera-0" Feb 28 04:51:38 crc kubenswrapper[5014]: I0228 04:51:38.860729 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c1c70607-6183-4835-9ce6-fe3ef0d2b6fb-config-data-default\") pod \"openstack-galera-0\" (UID: \"c1c70607-6183-4835-9ce6-fe3ef0d2b6fb\") " pod="openstack/openstack-galera-0" Feb 28 04:51:38 crc kubenswrapper[5014]: I0228 04:51:38.860849 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1c70607-6183-4835-9ce6-fe3ef0d2b6fb-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c1c70607-6183-4835-9ce6-fe3ef0d2b6fb\") " pod="openstack/openstack-galera-0" Feb 28 04:51:38 crc kubenswrapper[5014]: I0228 04:51:38.861027 5014 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"c1c70607-6183-4835-9ce6-fe3ef0d2b6fb\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Feb 28 04:51:38 crc kubenswrapper[5014]: I0228 04:51:38.861332 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/c1c70607-6183-4835-9ce6-fe3ef0d2b6fb-config-data-generated\") pod \"openstack-galera-0\" (UID: \"c1c70607-6183-4835-9ce6-fe3ef0d2b6fb\") " 
pod="openstack/openstack-galera-0" Feb 28 04:51:38 crc kubenswrapper[5014]: I0228 04:51:38.862261 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/c1c70607-6183-4835-9ce6-fe3ef0d2b6fb-kolla-config\") pod \"openstack-galera-0\" (UID: \"c1c70607-6183-4835-9ce6-fe3ef0d2b6fb\") " pod="openstack/openstack-galera-0" Feb 28 04:51:38 crc kubenswrapper[5014]: I0228 04:51:38.862645 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/c1c70607-6183-4835-9ce6-fe3ef0d2b6fb-config-data-default\") pod \"openstack-galera-0\" (UID: \"c1c70607-6183-4835-9ce6-fe3ef0d2b6fb\") " pod="openstack/openstack-galera-0" Feb 28 04:51:38 crc kubenswrapper[5014]: I0228 04:51:38.863643 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1c70607-6183-4835-9ce6-fe3ef0d2b6fb-operator-scripts\") pod \"openstack-galera-0\" (UID: \"c1c70607-6183-4835-9ce6-fe3ef0d2b6fb\") " pod="openstack/openstack-galera-0" Feb 28 04:51:38 crc kubenswrapper[5014]: I0228 04:51:38.866526 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1c70607-6183-4835-9ce6-fe3ef0d2b6fb-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"c1c70607-6183-4835-9ce6-fe3ef0d2b6fb\") " pod="openstack/openstack-galera-0" Feb 28 04:51:38 crc kubenswrapper[5014]: I0228 04:51:38.869365 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1c70607-6183-4835-9ce6-fe3ef0d2b6fb-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"c1c70607-6183-4835-9ce6-fe3ef0d2b6fb\") " pod="openstack/openstack-galera-0" Feb 28 04:51:38 crc kubenswrapper[5014]: I0228 04:51:38.878593 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-f5xfz\" (UniqueName: \"kubernetes.io/projected/c1c70607-6183-4835-9ce6-fe3ef0d2b6fb-kube-api-access-f5xfz\") pod \"openstack-galera-0\" (UID: \"c1c70607-6183-4835-9ce6-fe3ef0d2b6fb\") " pod="openstack/openstack-galera-0" Feb 28 04:51:38 crc kubenswrapper[5014]: I0228 04:51:38.878655 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"c1c70607-6183-4835-9ce6-fe3ef0d2b6fb\") " pod="openstack/openstack-galera-0" Feb 28 04:51:38 crc kubenswrapper[5014]: I0228 04:51:38.997384 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.022388 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.024068 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.031706 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.032076 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-xcbf6" Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.032276 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.033443 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.035229 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.178321 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac71caa8-2f63-4b64-8d37-a1b364b62158-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ac71caa8-2f63-4b64-8d37-a1b364b62158\") " pod="openstack/openstack-cell1-galera-0" Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.178395 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ac71caa8-2f63-4b64-8d37-a1b364b62158-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ac71caa8-2f63-4b64-8d37-a1b364b62158\") " pod="openstack/openstack-cell1-galera-0" Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.178427 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/ac71caa8-2f63-4b64-8d37-a1b364b62158-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ac71caa8-2f63-4b64-8d37-a1b364b62158\") " pod="openstack/openstack-cell1-galera-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.178483 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac71caa8-2f63-4b64-8d37-a1b364b62158-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ac71caa8-2f63-4b64-8d37-a1b364b62158\") " pod="openstack/openstack-cell1-galera-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.178512 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac71caa8-2f63-4b64-8d37-a1b364b62158-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ac71caa8-2f63-4b64-8d37-a1b364b62158\") " pod="openstack/openstack-cell1-galera-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.178544 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ac71caa8-2f63-4b64-8d37-a1b364b62158\") " pod="openstack/openstack-cell1-galera-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.178600 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cddf\" (UniqueName: \"kubernetes.io/projected/ac71caa8-2f63-4b64-8d37-a1b364b62158-kube-api-access-8cddf\") pod \"openstack-cell1-galera-0\" (UID: \"ac71caa8-2f63-4b64-8d37-a1b364b62158\") " pod="openstack/openstack-cell1-galera-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.178637 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ac71caa8-2f63-4b64-8d37-a1b364b62158-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ac71caa8-2f63-4b64-8d37-a1b364b62158\") " pod="openstack/openstack-cell1-galera-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.276056 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.277090 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.279159 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.279195 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-7xm6t"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.279266 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.279572 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cddf\" (UniqueName: \"kubernetes.io/projected/ac71caa8-2f63-4b64-8d37-a1b364b62158-kube-api-access-8cddf\") pod \"openstack-cell1-galera-0\" (UID: \"ac71caa8-2f63-4b64-8d37-a1b364b62158\") " pod="openstack/openstack-cell1-galera-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.279629 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ac71caa8-2f63-4b64-8d37-a1b364b62158-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ac71caa8-2f63-4b64-8d37-a1b364b62158\") " pod="openstack/openstack-cell1-galera-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.279681 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac71caa8-2f63-4b64-8d37-a1b364b62158-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ac71caa8-2f63-4b64-8d37-a1b364b62158\") " pod="openstack/openstack-cell1-galera-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.279704 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ac71caa8-2f63-4b64-8d37-a1b364b62158-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ac71caa8-2f63-4b64-8d37-a1b364b62158\") " pod="openstack/openstack-cell1-galera-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.279730 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ac71caa8-2f63-4b64-8d37-a1b364b62158-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ac71caa8-2f63-4b64-8d37-a1b364b62158\") " pod="openstack/openstack-cell1-galera-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.279771 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac71caa8-2f63-4b64-8d37-a1b364b62158-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ac71caa8-2f63-4b64-8d37-a1b364b62158\") " pod="openstack/openstack-cell1-galera-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.279795 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac71caa8-2f63-4b64-8d37-a1b364b62158-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ac71caa8-2f63-4b64-8d37-a1b364b62158\") " pod="openstack/openstack-cell1-galera-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.279853 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ac71caa8-2f63-4b64-8d37-a1b364b62158\") " pod="openstack/openstack-cell1-galera-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.280142 5014 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ac71caa8-2f63-4b64-8d37-a1b364b62158\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/openstack-cell1-galera-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.280452 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/ac71caa8-2f63-4b64-8d37-a1b364b62158-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"ac71caa8-2f63-4b64-8d37-a1b364b62158\") " pod="openstack/openstack-cell1-galera-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.280582 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ac71caa8-2f63-4b64-8d37-a1b364b62158-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"ac71caa8-2f63-4b64-8d37-a1b364b62158\") " pod="openstack/openstack-cell1-galera-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.281437 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/ac71caa8-2f63-4b64-8d37-a1b364b62158-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"ac71caa8-2f63-4b64-8d37-a1b364b62158\") " pod="openstack/openstack-cell1-galera-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.282432 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ac71caa8-2f63-4b64-8d37-a1b364b62158-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"ac71caa8-2f63-4b64-8d37-a1b364b62158\") " pod="openstack/openstack-cell1-galera-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.296782 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac71caa8-2f63-4b64-8d37-a1b364b62158-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"ac71caa8-2f63-4b64-8d37-a1b364b62158\") " pod="openstack/openstack-cell1-galera-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.296858 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/ac71caa8-2f63-4b64-8d37-a1b364b62158-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"ac71caa8-2f63-4b64-8d37-a1b364b62158\") " pod="openstack/openstack-cell1-galera-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.302299 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.302953 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-cell1-galera-0\" (UID: \"ac71caa8-2f63-4b64-8d37-a1b364b62158\") " pod="openstack/openstack-cell1-galera-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.329454 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cddf\" (UniqueName: \"kubernetes.io/projected/ac71caa8-2f63-4b64-8d37-a1b364b62158-kube-api-access-8cddf\") pod \"openstack-cell1-galera-0\" (UID: \"ac71caa8-2f63-4b64-8d37-a1b364b62158\") " pod="openstack/openstack-cell1-galera-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.367522 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.380748 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1420f298-151a-48af-bdb2-a58d5143967c-kolla-config\") pod \"memcached-0\" (UID: \"1420f298-151a-48af-bdb2-a58d5143967c\") " pod="openstack/memcached-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.380853 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/1420f298-151a-48af-bdb2-a58d5143967c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"1420f298-151a-48af-bdb2-a58d5143967c\") " pod="openstack/memcached-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.380920 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1420f298-151a-48af-bdb2-a58d5143967c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"1420f298-151a-48af-bdb2-a58d5143967c\") " pod="openstack/memcached-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.380938 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1420f298-151a-48af-bdb2-a58d5143967c-config-data\") pod \"memcached-0\" (UID: \"1420f298-151a-48af-bdb2-a58d5143967c\") " pod="openstack/memcached-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.380957 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccsrh\" (UniqueName: \"kubernetes.io/projected/1420f298-151a-48af-bdb2-a58d5143967c-kube-api-access-ccsrh\") pod \"memcached-0\" (UID: \"1420f298-151a-48af-bdb2-a58d5143967c\") " pod="openstack/memcached-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.482508 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1420f298-151a-48af-bdb2-a58d5143967c-kolla-config\") pod \"memcached-0\" (UID: \"1420f298-151a-48af-bdb2-a58d5143967c\") " pod="openstack/memcached-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.482627 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/1420f298-151a-48af-bdb2-a58d5143967c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"1420f298-151a-48af-bdb2-a58d5143967c\") " pod="openstack/memcached-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.482681 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1420f298-151a-48af-bdb2-a58d5143967c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"1420f298-151a-48af-bdb2-a58d5143967c\") " pod="openstack/memcached-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.482864 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1420f298-151a-48af-bdb2-a58d5143967c-config-data\") pod \"memcached-0\" (UID: \"1420f298-151a-48af-bdb2-a58d5143967c\") " pod="openstack/memcached-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.482892 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccsrh\" (UniqueName: \"kubernetes.io/projected/1420f298-151a-48af-bdb2-a58d5143967c-kube-api-access-ccsrh\") pod \"memcached-0\" (UID: \"1420f298-151a-48af-bdb2-a58d5143967c\") " pod="openstack/memcached-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.483607 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/1420f298-151a-48af-bdb2-a58d5143967c-kolla-config\") pod \"memcached-0\" (UID: \"1420f298-151a-48af-bdb2-a58d5143967c\") " pod="openstack/memcached-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.483879 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1420f298-151a-48af-bdb2-a58d5143967c-config-data\") pod \"memcached-0\" (UID: \"1420f298-151a-48af-bdb2-a58d5143967c\") " pod="openstack/memcached-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.489636 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1420f298-151a-48af-bdb2-a58d5143967c-combined-ca-bundle\") pod \"memcached-0\" (UID: \"1420f298-151a-48af-bdb2-a58d5143967c\") " pod="openstack/memcached-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.497365 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/1420f298-151a-48af-bdb2-a58d5143967c-memcached-tls-certs\") pod \"memcached-0\" (UID: \"1420f298-151a-48af-bdb2-a58d5143967c\") " pod="openstack/memcached-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.508501 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccsrh\" (UniqueName: \"kubernetes.io/projected/1420f298-151a-48af-bdb2-a58d5143967c-kube-api-access-ccsrh\") pod \"memcached-0\" (UID: \"1420f298-151a-48af-bdb2-a58d5143967c\") " pod="openstack/memcached-0"
Feb 28 04:51:40 crc kubenswrapper[5014]: I0228 04:51:40.686771 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Feb 28 04:51:42 crc kubenswrapper[5014]: I0228 04:51:42.576813 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 28 04:51:42 crc kubenswrapper[5014]: I0228 04:51:42.577704 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 28 04:51:42 crc kubenswrapper[5014]: I0228 04:51:42.579855 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-f9mqt"
Feb 28 04:51:42 crc kubenswrapper[5014]: I0228 04:51:42.583575 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Feb 28 04:51:42 crc kubenswrapper[5014]: I0228 04:51:42.724601 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97n9h\" (UniqueName: \"kubernetes.io/projected/35f1a99d-7cdf-41d2-8106-e18f5660eb1b-kube-api-access-97n9h\") pod \"kube-state-metrics-0\" (UID: \"35f1a99d-7cdf-41d2-8106-e18f5660eb1b\") " pod="openstack/kube-state-metrics-0"
Feb 28 04:51:42 crc kubenswrapper[5014]: I0228 04:51:42.825582 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97n9h\" (UniqueName: \"kubernetes.io/projected/35f1a99d-7cdf-41d2-8106-e18f5660eb1b-kube-api-access-97n9h\") pod \"kube-state-metrics-0\" (UID: \"35f1a99d-7cdf-41d2-8106-e18f5660eb1b\") " pod="openstack/kube-state-metrics-0"
Feb 28 04:51:42 crc kubenswrapper[5014]: I0228 04:51:42.843787 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97n9h\" (UniqueName: \"kubernetes.io/projected/35f1a99d-7cdf-41d2-8106-e18f5660eb1b-kube-api-access-97n9h\") pod \"kube-state-metrics-0\" (UID: \"35f1a99d-7cdf-41d2-8106-e18f5660eb1b\") " pod="openstack/kube-state-metrics-0"
Feb 28 04:51:42 crc kubenswrapper[5014]: I0228 04:51:42.898033 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.410658 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-9qps6"]
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.412019 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9qps6"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.414014 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.415176 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-h246l"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.424178 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.432021 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9qps6"]
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.487342 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-6vfgk"]
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.490204 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-6vfgk"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.510914 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-6vfgk"]
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.572834 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c3f16040-f11b-405c-b332-7ee5eabac2bd-etc-ovs\") pod \"ovn-controller-ovs-6vfgk\" (UID: \"c3f16040-f11b-405c-b332-7ee5eabac2bd\") " pod="openstack/ovn-controller-ovs-6vfgk"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.572899 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/02ab5d98-13ab-483d-b32b-a509bedd8ded-ovn-controller-tls-certs\") pod \"ovn-controller-9qps6\" (UID: \"02ab5d98-13ab-483d-b32b-a509bedd8ded\") " pod="openstack/ovn-controller-9qps6"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.572958 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvnkh\" (UniqueName: \"kubernetes.io/projected/c3f16040-f11b-405c-b332-7ee5eabac2bd-kube-api-access-nvnkh\") pod \"ovn-controller-ovs-6vfgk\" (UID: \"c3f16040-f11b-405c-b332-7ee5eabac2bd\") " pod="openstack/ovn-controller-ovs-6vfgk"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.572977 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/02ab5d98-13ab-483d-b32b-a509bedd8ded-var-run\") pod \"ovn-controller-9qps6\" (UID: \"02ab5d98-13ab-483d-b32b-a509bedd8ded\") " pod="openstack/ovn-controller-9qps6"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.572997 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c3f16040-f11b-405c-b332-7ee5eabac2bd-scripts\") pod \"ovn-controller-ovs-6vfgk\" (UID: \"c3f16040-f11b-405c-b332-7ee5eabac2bd\") " pod="openstack/ovn-controller-ovs-6vfgk"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.573051 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02ab5d98-13ab-483d-b32b-a509bedd8ded-scripts\") pod \"ovn-controller-9qps6\" (UID: \"02ab5d98-13ab-483d-b32b-a509bedd8ded\") " pod="openstack/ovn-controller-9qps6"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.573068 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c3f16040-f11b-405c-b332-7ee5eabac2bd-var-run\") pod \"ovn-controller-ovs-6vfgk\" (UID: \"c3f16040-f11b-405c-b332-7ee5eabac2bd\") " pod="openstack/ovn-controller-ovs-6vfgk"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.573122 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c3f16040-f11b-405c-b332-7ee5eabac2bd-var-log\") pod \"ovn-controller-ovs-6vfgk\" (UID: \"c3f16040-f11b-405c-b332-7ee5eabac2bd\") " pod="openstack/ovn-controller-ovs-6vfgk"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.573139 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c3f16040-f11b-405c-b332-7ee5eabac2bd-var-lib\") pod \"ovn-controller-ovs-6vfgk\" (UID: \"c3f16040-f11b-405c-b332-7ee5eabac2bd\") " pod="openstack/ovn-controller-ovs-6vfgk"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.573156 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4ss2\" (UniqueName: \"kubernetes.io/projected/02ab5d98-13ab-483d-b32b-a509bedd8ded-kube-api-access-j4ss2\") pod \"ovn-controller-9qps6\" (UID: \"02ab5d98-13ab-483d-b32b-a509bedd8ded\") " pod="openstack/ovn-controller-9qps6"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.573175 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/02ab5d98-13ab-483d-b32b-a509bedd8ded-var-run-ovn\") pod \"ovn-controller-9qps6\" (UID: \"02ab5d98-13ab-483d-b32b-a509bedd8ded\") " pod="openstack/ovn-controller-9qps6"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.573189 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02ab5d98-13ab-483d-b32b-a509bedd8ded-combined-ca-bundle\") pod \"ovn-controller-9qps6\" (UID: \"02ab5d98-13ab-483d-b32b-a509bedd8ded\") " pod="openstack/ovn-controller-9qps6"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.573218 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/02ab5d98-13ab-483d-b32b-a509bedd8ded-var-log-ovn\") pod \"ovn-controller-9qps6\" (UID: \"02ab5d98-13ab-483d-b32b-a509bedd8ded\") " pod="openstack/ovn-controller-9qps6"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.674787 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c3f16040-f11b-405c-b332-7ee5eabac2bd-var-log\") pod \"ovn-controller-ovs-6vfgk\" (UID: \"c3f16040-f11b-405c-b332-7ee5eabac2bd\") " pod="openstack/ovn-controller-ovs-6vfgk"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.674854 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c3f16040-f11b-405c-b332-7ee5eabac2bd-var-lib\") pod \"ovn-controller-ovs-6vfgk\" (UID: \"c3f16040-f11b-405c-b332-7ee5eabac2bd\") " pod="openstack/ovn-controller-ovs-6vfgk"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.674871 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4ss2\" (UniqueName: \"kubernetes.io/projected/02ab5d98-13ab-483d-b32b-a509bedd8ded-kube-api-access-j4ss2\") pod \"ovn-controller-9qps6\" (UID: \"02ab5d98-13ab-483d-b32b-a509bedd8ded\") " pod="openstack/ovn-controller-9qps6"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.674896 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/02ab5d98-13ab-483d-b32b-a509bedd8ded-var-run-ovn\") pod \"ovn-controller-9qps6\" (UID: \"02ab5d98-13ab-483d-b32b-a509bedd8ded\") " pod="openstack/ovn-controller-9qps6"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.674910 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02ab5d98-13ab-483d-b32b-a509bedd8ded-combined-ca-bundle\") pod \"ovn-controller-9qps6\" (UID: \"02ab5d98-13ab-483d-b32b-a509bedd8ded\") " pod="openstack/ovn-controller-9qps6"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.674928 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/02ab5d98-13ab-483d-b32b-a509bedd8ded-var-log-ovn\") pod \"ovn-controller-9qps6\" (UID: \"02ab5d98-13ab-483d-b32b-a509bedd8ded\") " pod="openstack/ovn-controller-9qps6"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.674958 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c3f16040-f11b-405c-b332-7ee5eabac2bd-etc-ovs\") pod \"ovn-controller-ovs-6vfgk\" (UID: \"c3f16040-f11b-405c-b332-7ee5eabac2bd\") " pod="openstack/ovn-controller-ovs-6vfgk"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.674973 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/02ab5d98-13ab-483d-b32b-a509bedd8ded-ovn-controller-tls-certs\") pod \"ovn-controller-9qps6\" (UID: \"02ab5d98-13ab-483d-b32b-a509bedd8ded\") " pod="openstack/ovn-controller-9qps6"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.675013 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvnkh\" (UniqueName: \"kubernetes.io/projected/c3f16040-f11b-405c-b332-7ee5eabac2bd-kube-api-access-nvnkh\") pod \"ovn-controller-ovs-6vfgk\" (UID: \"c3f16040-f11b-405c-b332-7ee5eabac2bd\") " pod="openstack/ovn-controller-ovs-6vfgk"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.675030 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/02ab5d98-13ab-483d-b32b-a509bedd8ded-var-run\") pod \"ovn-controller-9qps6\" (UID: \"02ab5d98-13ab-483d-b32b-a509bedd8ded\") " pod="openstack/ovn-controller-9qps6"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.675045 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c3f16040-f11b-405c-b332-7ee5eabac2bd-scripts\") pod \"ovn-controller-ovs-6vfgk\" (UID: \"c3f16040-f11b-405c-b332-7ee5eabac2bd\") " pod="openstack/ovn-controller-ovs-6vfgk"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.675067 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02ab5d98-13ab-483d-b32b-a509bedd8ded-scripts\") pod \"ovn-controller-9qps6\" (UID: \"02ab5d98-13ab-483d-b32b-a509bedd8ded\") " pod="openstack/ovn-controller-9qps6"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.675085 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c3f16040-f11b-405c-b332-7ee5eabac2bd-var-run\") pod \"ovn-controller-ovs-6vfgk\" (UID: \"c3f16040-f11b-405c-b332-7ee5eabac2bd\") " pod="openstack/ovn-controller-ovs-6vfgk"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.675546 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/c3f16040-f11b-405c-b332-7ee5eabac2bd-var-run\") pod \"ovn-controller-ovs-6vfgk\" (UID: \"c3f16040-f11b-405c-b332-7ee5eabac2bd\") " pod="openstack/ovn-controller-ovs-6vfgk"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.675590 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/02ab5d98-13ab-483d-b32b-a509bedd8ded-var-log-ovn\") pod \"ovn-controller-9qps6\" (UID: \"02ab5d98-13ab-483d-b32b-a509bedd8ded\") " pod="openstack/ovn-controller-9qps6"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.675673 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/c3f16040-f11b-405c-b332-7ee5eabac2bd-var-log\") pod \"ovn-controller-ovs-6vfgk\" (UID: \"c3f16040-f11b-405c-b332-7ee5eabac2bd\") " pod="openstack/ovn-controller-ovs-6vfgk"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.675717 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/c3f16040-f11b-405c-b332-7ee5eabac2bd-etc-ovs\") pod \"ovn-controller-ovs-6vfgk\" (UID: \"c3f16040-f11b-405c-b332-7ee5eabac2bd\") " pod="openstack/ovn-controller-ovs-6vfgk"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.675773 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/c3f16040-f11b-405c-b332-7ee5eabac2bd-var-lib\") pod \"ovn-controller-ovs-6vfgk\" (UID: \"c3f16040-f11b-405c-b332-7ee5eabac2bd\") " pod="openstack/ovn-controller-ovs-6vfgk"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.675998 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/02ab5d98-13ab-483d-b32b-a509bedd8ded-var-run\") pod \"ovn-controller-9qps6\" (UID: \"02ab5d98-13ab-483d-b32b-a509bedd8ded\") " pod="openstack/ovn-controller-9qps6"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.676039 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/02ab5d98-13ab-483d-b32b-a509bedd8ded-var-run-ovn\") pod \"ovn-controller-9qps6\" (UID: \"02ab5d98-13ab-483d-b32b-a509bedd8ded\") " pod="openstack/ovn-controller-9qps6"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.678339 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/02ab5d98-13ab-483d-b32b-a509bedd8ded-scripts\") pod \"ovn-controller-9qps6\" (UID: \"02ab5d98-13ab-483d-b32b-a509bedd8ded\") " pod="openstack/ovn-controller-9qps6"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.678938 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c3f16040-f11b-405c-b332-7ee5eabac2bd-scripts\") pod \"ovn-controller-ovs-6vfgk\" (UID: \"c3f16040-f11b-405c-b332-7ee5eabac2bd\") " pod="openstack/ovn-controller-ovs-6vfgk"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.680677 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/02ab5d98-13ab-483d-b32b-a509bedd8ded-ovn-controller-tls-certs\") pod \"ovn-controller-9qps6\" (UID: \"02ab5d98-13ab-483d-b32b-a509bedd8ded\") " pod="openstack/ovn-controller-9qps6"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.690395 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02ab5d98-13ab-483d-b32b-a509bedd8ded-combined-ca-bundle\") pod \"ovn-controller-9qps6\" (UID: \"02ab5d98-13ab-483d-b32b-a509bedd8ded\") " pod="openstack/ovn-controller-9qps6"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.691724 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4ss2\" (UniqueName: \"kubernetes.io/projected/02ab5d98-13ab-483d-b32b-a509bedd8ded-kube-api-access-j4ss2\") pod \"ovn-controller-9qps6\" (UID: \"02ab5d98-13ab-483d-b32b-a509bedd8ded\") " pod="openstack/ovn-controller-9qps6"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.699056 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvnkh\" (UniqueName: \"kubernetes.io/projected/c3f16040-f11b-405c-b332-7ee5eabac2bd-kube-api-access-nvnkh\") pod \"ovn-controller-ovs-6vfgk\" (UID: \"c3f16040-f11b-405c-b332-7ee5eabac2bd\") " pod="openstack/ovn-controller-ovs-6vfgk"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.733989 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9qps6"
Feb 28 04:51:45 crc kubenswrapper[5014]: I0228 04:51:45.809684 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-6vfgk"
Feb 28 04:51:46 crc kubenswrapper[5014]: I0228 04:51:46.005332 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 28 04:51:46 crc kubenswrapper[5014]: I0228 04:51:46.006545 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Feb 28 04:51:46 crc kubenswrapper[5014]: I0228 04:51:46.009238 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-wvrkg"
Feb 28 04:51:46 crc kubenswrapper[5014]: I0228 04:51:46.009484 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Feb 28 04:51:46 crc kubenswrapper[5014]: I0228 04:51:46.010139 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Feb 28 04:51:46 crc kubenswrapper[5014]: I0228 04:51:46.010225 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Feb 28 04:51:46 crc kubenswrapper[5014]: I0228 04:51:46.018031 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Feb 28 04:51:46 crc kubenswrapper[5014]: I0228 04:51:46.023565 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 28 04:51:46 crc kubenswrapper[5014]: I0228 04:51:46.185127 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwslh\" (UniqueName: \"kubernetes.io/projected/5a44d0e3-2ba4-4d6f-924b-1f516c90a11f-kube-api-access-nwslh\") pod \"ovsdbserver-nb-0\" (UID: \"5a44d0e3-2ba4-4d6f-924b-1f516c90a11f\") " pod="openstack/ovsdbserver-nb-0"
Feb 28 04:51:46 crc kubenswrapper[5014]: I0228 04:51:46.185218 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a44d0e3-2ba4-4d6f-924b-1f516c90a11f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5a44d0e3-2ba4-4d6f-924b-1f516c90a11f\") " pod="openstack/ovsdbserver-nb-0"
Feb 28 04:51:46 crc kubenswrapper[5014]: I0228 04:51:46.185309 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"5a44d0e3-2ba4-4d6f-924b-1f516c90a11f\") " pod="openstack/ovsdbserver-nb-0"
Feb 28 04:51:46 crc kubenswrapper[5014]: I0228 04:51:46.185381 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a44d0e3-2ba4-4d6f-924b-1f516c90a11f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"5a44d0e3-2ba4-4d6f-924b-1f516c90a11f\") " pod="openstack/ovsdbserver-nb-0"
Feb 28 04:51:46 crc kubenswrapper[5014]: I0228 04:51:46.185440 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a44d0e3-2ba4-4d6f-924b-1f516c90a11f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"5a44d0e3-2ba4-4d6f-924b-1f516c90a11f\") " pod="openstack/ovsdbserver-nb-0"
Feb 28 04:51:46 crc kubenswrapper[5014]: I0228 04:51:46.185494 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5a44d0e3-2ba4-4d6f-924b-1f516c90a11f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"5a44d0e3-2ba4-4d6f-924b-1f516c90a11f\") " pod="openstack/ovsdbserver-nb-0"
Feb 28 04:51:46 crc kubenswrapper[5014]: I0228 04:51:46.185556 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a44d0e3-2ba4-4d6f-924b-1f516c90a11f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5a44d0e3-2ba4-4d6f-924b-1f516c90a11f\") " pod="openstack/ovsdbserver-nb-0"
Feb 28 04:51:46 crc kubenswrapper[5014]: I0228 04:51:46.185607 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a44d0e3-2ba4-4d6f-924b-1f516c90a11f-config\") pod \"ovsdbserver-nb-0\" (UID: \"5a44d0e3-2ba4-4d6f-924b-1f516c90a11f\") " pod="openstack/ovsdbserver-nb-0"
Feb 28 04:51:46 crc kubenswrapper[5014]: I0228 04:51:46.286564 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a44d0e3-2ba4-4d6f-924b-1f516c90a11f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"5a44d0e3-2ba4-4d6f-924b-1f516c90a11f\") " pod="openstack/ovsdbserver-nb-0"
Feb 28 04:51:46 crc kubenswrapper[5014]: I0228 04:51:46.286619 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a44d0e3-2ba4-4d6f-924b-1f516c90a11f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"5a44d0e3-2ba4-4d6f-924b-1f516c90a11f\") " pod="openstack/ovsdbserver-nb-0"
Feb 28 04:51:46 crc kubenswrapper[5014]: I0228 04:51:46.286650 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5a44d0e3-2ba4-4d6f-924b-1f516c90a11f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"5a44d0e3-2ba4-4d6f-924b-1f516c90a11f\") " pod="openstack/ovsdbserver-nb-0"
Feb 28 04:51:46 crc kubenswrapper[5014]: I0228 04:51:46.286673 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a44d0e3-2ba4-4d6f-924b-1f516c90a11f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5a44d0e3-2ba4-4d6f-924b-1f516c90a11f\") " pod="openstack/ovsdbserver-nb-0"
Feb 28 04:51:46 crc kubenswrapper[5014]: I0228 04:51:46.286693 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a44d0e3-2ba4-4d6f-924b-1f516c90a11f-config\") pod \"ovsdbserver-nb-0\" (UID: \"5a44d0e3-2ba4-4d6f-924b-1f516c90a11f\") " pod="openstack/ovsdbserver-nb-0"
Feb
28 04:51:46 crc kubenswrapper[5014]: I0228 04:51:46.286740 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nwslh\" (UniqueName: \"kubernetes.io/projected/5a44d0e3-2ba4-4d6f-924b-1f516c90a11f-kube-api-access-nwslh\") pod \"ovsdbserver-nb-0\" (UID: \"5a44d0e3-2ba4-4d6f-924b-1f516c90a11f\") " pod="openstack/ovsdbserver-nb-0" Feb 28 04:51:46 crc kubenswrapper[5014]: I0228 04:51:46.286761 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a44d0e3-2ba4-4d6f-924b-1f516c90a11f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5a44d0e3-2ba4-4d6f-924b-1f516c90a11f\") " pod="openstack/ovsdbserver-nb-0" Feb 28 04:51:46 crc kubenswrapper[5014]: I0228 04:51:46.286797 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"5a44d0e3-2ba4-4d6f-924b-1f516c90a11f\") " pod="openstack/ovsdbserver-nb-0" Feb 28 04:51:46 crc kubenswrapper[5014]: I0228 04:51:46.287265 5014 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"5a44d0e3-2ba4-4d6f-924b-1f516c90a11f\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-nb-0" Feb 28 04:51:46 crc kubenswrapper[5014]: I0228 04:51:46.287465 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5a44d0e3-2ba4-4d6f-924b-1f516c90a11f-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"5a44d0e3-2ba4-4d6f-924b-1f516c90a11f\") " pod="openstack/ovsdbserver-nb-0" Feb 28 04:51:46 crc kubenswrapper[5014]: I0228 04:51:46.288631 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/5a44d0e3-2ba4-4d6f-924b-1f516c90a11f-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"5a44d0e3-2ba4-4d6f-924b-1f516c90a11f\") " pod="openstack/ovsdbserver-nb-0" Feb 28 04:51:46 crc kubenswrapper[5014]: I0228 04:51:46.289559 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a44d0e3-2ba4-4d6f-924b-1f516c90a11f-config\") pod \"ovsdbserver-nb-0\" (UID: \"5a44d0e3-2ba4-4d6f-924b-1f516c90a11f\") " pod="openstack/ovsdbserver-nb-0" Feb 28 04:51:46 crc kubenswrapper[5014]: I0228 04:51:46.291408 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a44d0e3-2ba4-4d6f-924b-1f516c90a11f-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"5a44d0e3-2ba4-4d6f-924b-1f516c90a11f\") " pod="openstack/ovsdbserver-nb-0" Feb 28 04:51:46 crc kubenswrapper[5014]: I0228 04:51:46.292335 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a44d0e3-2ba4-4d6f-924b-1f516c90a11f-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5a44d0e3-2ba4-4d6f-924b-1f516c90a11f\") " pod="openstack/ovsdbserver-nb-0" Feb 28 04:51:46 crc kubenswrapper[5014]: I0228 04:51:46.292606 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5a44d0e3-2ba4-4d6f-924b-1f516c90a11f-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"5a44d0e3-2ba4-4d6f-924b-1f516c90a11f\") " pod="openstack/ovsdbserver-nb-0" Feb 28 04:51:46 crc kubenswrapper[5014]: I0228 04:51:46.305127 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwslh\" (UniqueName: \"kubernetes.io/projected/5a44d0e3-2ba4-4d6f-924b-1f516c90a11f-kube-api-access-nwslh\") pod \"ovsdbserver-nb-0\" (UID: \"5a44d0e3-2ba4-4d6f-924b-1f516c90a11f\") " 
pod="openstack/ovsdbserver-nb-0" Feb 28 04:51:46 crc kubenswrapper[5014]: I0228 04:51:46.319628 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"5a44d0e3-2ba4-4d6f-924b-1f516c90a11f\") " pod="openstack/ovsdbserver-nb-0" Feb 28 04:51:46 crc kubenswrapper[5014]: I0228 04:51:46.365770 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 28 04:51:48 crc kubenswrapper[5014]: E0228 04:51:48.764049 5014 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 28 04:51:48 crc kubenswrapper[5014]: E0228 04:51:48.764564 5014 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mh8ng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-wfl7r_openstack(c140535a-737a-4dab-9b80-501657ca3921): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 04:51:48 crc kubenswrapper[5014]: E0228 04:51:48.764941 5014 log.go:32] "PullImage from image 
service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Feb 28 04:51:48 crc kubenswrapper[5014]: E0228 04:51:48.765019 5014 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-flwk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfil
e:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-cb8zt_openstack(9f7d64ae-72df-4502-91fc-2c9de87ee05f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 04:51:48 crc kubenswrapper[5014]: E0228 04:51:48.765887 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-wfl7r" podUID="c140535a-737a-4dab-9b80-501657ca3921" Feb 28 04:51:48 crc kubenswrapper[5014]: E0228 04:51:48.766287 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-cb8zt" podUID="9f7d64ae-72df-4502-91fc-2c9de87ee05f" Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.192437 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 28 04:51:49 crc kubenswrapper[5014]: W0228 04:51:49.242125 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1c70607_6183_4835_9ce6_fe3ef0d2b6fb.slice/crio-eddddc6441bee537dfe4aa78bb5a159176c285c2d51ae0930cea62709064df57 WatchSource:0}: Error finding container eddddc6441bee537dfe4aa78bb5a159176c285c2d51ae0930cea62709064df57: Status 404 returned error can't find the container with id eddddc6441bee537dfe4aa78bb5a159176c285c2d51ae0930cea62709064df57 Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.344874 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 28 04:51:49 crc 
kubenswrapper[5014]: W0228 04:51:49.350380 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1420f298_151a_48af_bdb2_a58d5143967c.slice/crio-cd4fcdda30c51578ee8283b3f25fd6fc03f62fd486f10412f0fbae7d5ee54151 WatchSource:0}: Error finding container cd4fcdda30c51578ee8283b3f25fd6fc03f62fd486f10412f0fbae7d5ee54151: Status 404 returned error can't find the container with id cd4fcdda30c51578ee8283b3f25fd6fc03f62fd486f10412f0fbae7d5ee54151 Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.411645 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.517107 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c1c70607-6183-4835-9ce6-fe3ef0d2b6fb","Type":"ContainerStarted","Data":"eddddc6441bee537dfe4aa78bb5a159176c285c2d51ae0930cea62709064df57"} Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.533223 5014 generic.go:334] "Generic (PLEG): container finished" podID="6128fe0e-47ba-405d-b527-38b43d9d262c" containerID="071e3fbf996ece5a3dc92746349f4f9cd50a0c6675642707e8e824dffa2be173" exitCode=0 Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.533744 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-66zrm" event={"ID":"6128fe0e-47ba-405d-b527-38b43d9d262c","Type":"ContainerDied","Data":"071e3fbf996ece5a3dc92746349f4f9cd50a0c6675642707e8e824dffa2be173"} Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.534751 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"351fb773-0669-41c0-aee8-0469f34d64c9","Type":"ContainerStarted","Data":"e6db41dc1fdc6643734cd7b0c3b20b5e954611ce2b368a0eef3a854f905053ab"} Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.545018 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" 
event={"ID":"1420f298-151a-48af-bdb2-a58d5143967c","Type":"ContainerStarted","Data":"cd4fcdda30c51578ee8283b3f25fd6fc03f62fd486f10412f0fbae7d5ee54151"} Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.549523 5014 generic.go:334] "Generic (PLEG): container finished" podID="6f39bd6f-3b06-4fbb-886d-b96e77209f53" containerID="2fc04a907aaf3205f61dd158bb0ad1daf10dad80f5bde4a623f3849c1ab06674" exitCode=0 Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.549628 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-gn8d5" event={"ID":"6f39bd6f-3b06-4fbb-886d-b96e77209f53","Type":"ContainerDied","Data":"2fc04a907aaf3205f61dd158bb0ad1daf10dad80f5bde4a623f3849c1ab06674"} Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.575390 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.626381 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.627500 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.631601 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.631667 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.631623 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-kshjx" Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.632236 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.638772 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.710738 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 28 04:51:49 crc kubenswrapper[5014]: W0228 04:51:49.716994 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35f1a99d_7cdf_41d2_8106_e18f5660eb1b.slice/crio-ca0801814ddccc3eab733703a5a47e6c988760014e8a9df2c3621948ce0a44cf WatchSource:0}: Error finding container ca0801814ddccc3eab733703a5a47e6c988760014e8a9df2c3621948ce0a44cf: Status 404 returned error can't find the container with id ca0801814ddccc3eab733703a5a47e6c988760014e8a9df2c3621948ce0a44cf Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.751263 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/569b1ad4-179c-4852-a5fc-509fe31df812-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"569b1ad4-179c-4852-a5fc-509fe31df812\") " pod="openstack/ovsdbserver-sb-0" Feb 28 04:51:49 crc 
kubenswrapper[5014]: I0228 04:51:49.751625 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/569b1ad4-179c-4852-a5fc-509fe31df812-config\") pod \"ovsdbserver-sb-0\" (UID: \"569b1ad4-179c-4852-a5fc-509fe31df812\") " pod="openstack/ovsdbserver-sb-0" Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.751679 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxlz5\" (UniqueName: \"kubernetes.io/projected/569b1ad4-179c-4852-a5fc-509fe31df812-kube-api-access-nxlz5\") pod \"ovsdbserver-sb-0\" (UID: \"569b1ad4-179c-4852-a5fc-509fe31df812\") " pod="openstack/ovsdbserver-sb-0" Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.751698 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/569b1ad4-179c-4852-a5fc-509fe31df812-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"569b1ad4-179c-4852-a5fc-509fe31df812\") " pod="openstack/ovsdbserver-sb-0" Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.751758 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"569b1ad4-179c-4852-a5fc-509fe31df812\") " pod="openstack/ovsdbserver-sb-0" Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.751925 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/569b1ad4-179c-4852-a5fc-509fe31df812-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"569b1ad4-179c-4852-a5fc-509fe31df812\") " pod="openstack/ovsdbserver-sb-0" Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.751949 5014 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/569b1ad4-179c-4852-a5fc-509fe31df812-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"569b1ad4-179c-4852-a5fc-509fe31df812\") " pod="openstack/ovsdbserver-sb-0" Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.753017 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/569b1ad4-179c-4852-a5fc-509fe31df812-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"569b1ad4-179c-4852-a5fc-509fe31df812\") " pod="openstack/ovsdbserver-sb-0" Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.762416 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 28 04:51:49 crc kubenswrapper[5014]: W0228 04:51:49.776123 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46e75f06_c8df_44b8_a6e4_8f663e8b0a1a.slice/crio-7501ac43739d89a52c67369e86cd763ac003cab29148cd582786f315d5f67f7d WatchSource:0}: Error finding container 7501ac43739d89a52c67369e86cd763ac003cab29148cd582786f315d5f67f7d: Status 404 returned error can't find the container with id 7501ac43739d89a52c67369e86cd763ac003cab29148cd582786f315d5f67f7d Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.839555 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9qps6"] Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.854411 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/569b1ad4-179c-4852-a5fc-509fe31df812-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"569b1ad4-179c-4852-a5fc-509fe31df812\") " pod="openstack/ovsdbserver-sb-0" Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.854448 5014 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/569b1ad4-179c-4852-a5fc-509fe31df812-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"569b1ad4-179c-4852-a5fc-509fe31df812\") " pod="openstack/ovsdbserver-sb-0" Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.854479 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/569b1ad4-179c-4852-a5fc-509fe31df812-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"569b1ad4-179c-4852-a5fc-509fe31df812\") " pod="openstack/ovsdbserver-sb-0" Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.854517 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/569b1ad4-179c-4852-a5fc-509fe31df812-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"569b1ad4-179c-4852-a5fc-509fe31df812\") " pod="openstack/ovsdbserver-sb-0" Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.854535 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/569b1ad4-179c-4852-a5fc-509fe31df812-config\") pod \"ovsdbserver-sb-0\" (UID: \"569b1ad4-179c-4852-a5fc-509fe31df812\") " pod="openstack/ovsdbserver-sb-0" Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.854581 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxlz5\" (UniqueName: \"kubernetes.io/projected/569b1ad4-179c-4852-a5fc-509fe31df812-kube-api-access-nxlz5\") pod \"ovsdbserver-sb-0\" (UID: \"569b1ad4-179c-4852-a5fc-509fe31df812\") " pod="openstack/ovsdbserver-sb-0" Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.854597 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/569b1ad4-179c-4852-a5fc-509fe31df812-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"569b1ad4-179c-4852-a5fc-509fe31df812\") " pod="openstack/ovsdbserver-sb-0" Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.854623 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"569b1ad4-179c-4852-a5fc-509fe31df812\") " pod="openstack/ovsdbserver-sb-0" Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.855004 5014 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"569b1ad4-179c-4852-a5fc-509fe31df812\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-sb-0" Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.855333 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/569b1ad4-179c-4852-a5fc-509fe31df812-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"569b1ad4-179c-4852-a5fc-509fe31df812\") " pod="openstack/ovsdbserver-sb-0" Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.856169 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/569b1ad4-179c-4852-a5fc-509fe31df812-config\") pod \"ovsdbserver-sb-0\" (UID: \"569b1ad4-179c-4852-a5fc-509fe31df812\") " pod="openstack/ovsdbserver-sb-0" Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.857205 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/569b1ad4-179c-4852-a5fc-509fe31df812-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"569b1ad4-179c-4852-a5fc-509fe31df812\") " pod="openstack/ovsdbserver-sb-0" Feb 28 04:51:49 crc kubenswrapper[5014]: 
I0228 04:51:49.866712 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/569b1ad4-179c-4852-a5fc-509fe31df812-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"569b1ad4-179c-4852-a5fc-509fe31df812\") " pod="openstack/ovsdbserver-sb-0" Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.872181 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/569b1ad4-179c-4852-a5fc-509fe31df812-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"569b1ad4-179c-4852-a5fc-509fe31df812\") " pod="openstack/ovsdbserver-sb-0" Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.874999 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/569b1ad4-179c-4852-a5fc-509fe31df812-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"569b1ad4-179c-4852-a5fc-509fe31df812\") " pod="openstack/ovsdbserver-sb-0" Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.875970 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxlz5\" (UniqueName: \"kubernetes.io/projected/569b1ad4-179c-4852-a5fc-509fe31df812-kube-api-access-nxlz5\") pod \"ovsdbserver-sb-0\" (UID: \"569b1ad4-179c-4852-a5fc-509fe31df812\") " pod="openstack/ovsdbserver-sb-0" Feb 28 04:51:49 crc kubenswrapper[5014]: E0228 04:51:49.897637 5014 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 28 04:51:49 crc kubenswrapper[5014]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/6f39bd6f-3b06-4fbb-886d-b96e77209f53/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 28 04:51:49 crc kubenswrapper[5014]: > podSandboxID="ce5befd8915dd191ead47241abc9ec6768015873fb0f92d256929f1d71a00e22" Feb 28 04:51:49 crc 
kubenswrapper[5014]: I0228 04:51:49.898096 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 28 04:51:49 crc kubenswrapper[5014]: E0228 04:51:49.898196 5014 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 28 04:51:49 crc kubenswrapper[5014]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-79gjl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-gn8d5_openstack(6f39bd6f-3b06-4fbb-886d-b96e77209f53): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/6f39bd6f-3b06-4fbb-886d-b96e77209f53/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 28 04:51:49 crc kubenswrapper[5014]: > logger="UnhandledError" Feb 28 04:51:49 crc kubenswrapper[5014]: E0228 04:51:49.901688 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/6f39bd6f-3b06-4fbb-886d-b96e77209f53/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-5ccc8479f9-gn8d5" podUID="6f39bd6f-3b06-4fbb-886d-b96e77209f53" Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.903696 5014 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-sb-0\" (UID: \"569b1ad4-179c-4852-a5fc-509fe31df812\") " pod="openstack/ovsdbserver-sb-0" Feb 28 04:51:49 crc kubenswrapper[5014]: W0228 04:51:49.933006 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5a44d0e3_2ba4_4d6f_924b_1f516c90a11f.slice/crio-ec0c8acbb72a692bfb8dccb8752b13e06ccb50971e7ca67cbfb830ffd78cf00b WatchSource:0}: Error finding container ec0c8acbb72a692bfb8dccb8752b13e06ccb50971e7ca67cbfb830ffd78cf00b: Status 404 returned error can't find the container with id ec0c8acbb72a692bfb8dccb8752b13e06ccb50971e7ca67cbfb830ffd78cf00b Feb 28 04:51:49 crc kubenswrapper[5014]: I0228 04:51:49.960872 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.028090 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-6vfgk"] Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.050987 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-cb8zt" Feb 28 04:51:50 crc kubenswrapper[5014]: W0228 04:51:50.060724 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3f16040_f11b_405c_b332_7ee5eabac2bd.slice/crio-8edd72724fbfa6a5ab4862be79adf59b48ecf77b5231cde40d6da38b875606d9 WatchSource:0}: Error finding container 8edd72724fbfa6a5ab4862be79adf59b48ecf77b5231cde40d6da38b875606d9: Status 404 returned error can't find the container with id 8edd72724fbfa6a5ab4862be79adf59b48ecf77b5231cde40d6da38b875606d9 Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.077608 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-wfl7r" Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.162920 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mh8ng\" (UniqueName: \"kubernetes.io/projected/c140535a-737a-4dab-9b80-501657ca3921-kube-api-access-mh8ng\") pod \"c140535a-737a-4dab-9b80-501657ca3921\" (UID: \"c140535a-737a-4dab-9b80-501657ca3921\") " Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.162986 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flwk9\" (UniqueName: \"kubernetes.io/projected/9f7d64ae-72df-4502-91fc-2c9de87ee05f-kube-api-access-flwk9\") pod \"9f7d64ae-72df-4502-91fc-2c9de87ee05f\" (UID: \"9f7d64ae-72df-4502-91fc-2c9de87ee05f\") " Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.163014 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f7d64ae-72df-4502-91fc-2c9de87ee05f-config\") pod \"9f7d64ae-72df-4502-91fc-2c9de87ee05f\" (UID: \"9f7d64ae-72df-4502-91fc-2c9de87ee05f\") " Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.163059 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c140535a-737a-4dab-9b80-501657ca3921-dns-svc\") pod \"c140535a-737a-4dab-9b80-501657ca3921\" (UID: \"c140535a-737a-4dab-9b80-501657ca3921\") " Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.163081 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c140535a-737a-4dab-9b80-501657ca3921-config\") pod \"c140535a-737a-4dab-9b80-501657ca3921\" (UID: \"c140535a-737a-4dab-9b80-501657ca3921\") " Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.163581 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/9f7d64ae-72df-4502-91fc-2c9de87ee05f-config" (OuterVolumeSpecName: "config") pod "9f7d64ae-72df-4502-91fc-2c9de87ee05f" (UID: "9f7d64ae-72df-4502-91fc-2c9de87ee05f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.163687 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c140535a-737a-4dab-9b80-501657ca3921-config" (OuterVolumeSpecName: "config") pod "c140535a-737a-4dab-9b80-501657ca3921" (UID: "c140535a-737a-4dab-9b80-501657ca3921"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.163719 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c140535a-737a-4dab-9b80-501657ca3921-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c140535a-737a-4dab-9b80-501657ca3921" (UID: "c140535a-737a-4dab-9b80-501657ca3921"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.167551 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f7d64ae-72df-4502-91fc-2c9de87ee05f-kube-api-access-flwk9" (OuterVolumeSpecName: "kube-api-access-flwk9") pod "9f7d64ae-72df-4502-91fc-2c9de87ee05f" (UID: "9f7d64ae-72df-4502-91fc-2c9de87ee05f"). InnerVolumeSpecName "kube-api-access-flwk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.167733 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c140535a-737a-4dab-9b80-501657ca3921-kube-api-access-mh8ng" (OuterVolumeSpecName: "kube-api-access-mh8ng") pod "c140535a-737a-4dab-9b80-501657ca3921" (UID: "c140535a-737a-4dab-9b80-501657ca3921"). InnerVolumeSpecName "kube-api-access-mh8ng". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.265834 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mh8ng\" (UniqueName: \"kubernetes.io/projected/c140535a-737a-4dab-9b80-501657ca3921-kube-api-access-mh8ng\") on node \"crc\" DevicePath \"\"" Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.265871 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flwk9\" (UniqueName: \"kubernetes.io/projected/9f7d64ae-72df-4502-91fc-2c9de87ee05f-kube-api-access-flwk9\") on node \"crc\" DevicePath \"\"" Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.265881 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f7d64ae-72df-4502-91fc-2c9de87ee05f-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.265910 5014 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c140535a-737a-4dab-9b80-501657ca3921-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.265922 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c140535a-737a-4dab-9b80-501657ca3921-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.448833 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 28 04:51:50 crc kubenswrapper[5014]: W0228 04:51:50.468785 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod569b1ad4_179c_4852_a5fc_509fe31df812.slice/crio-3fa590fb24ae633e89aa68a30b90975359491a6273bb043c595ef7543075144e WatchSource:0}: Error finding container 3fa590fb24ae633e89aa68a30b90975359491a6273bb043c595ef7543075144e: Status 404 returned error can't find the container 
with id 3fa590fb24ae633e89aa68a30b90975359491a6273bb043c595ef7543075144e Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.562464 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"5a44d0e3-2ba4-4d6f-924b-1f516c90a11f","Type":"ContainerStarted","Data":"ec0c8acbb72a692bfb8dccb8752b13e06ccb50971e7ca67cbfb830ffd78cf00b"} Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.565133 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-cb8zt" Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.565141 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-cb8zt" event={"ID":"9f7d64ae-72df-4502-91fc-2c9de87ee05f","Type":"ContainerDied","Data":"de20b4a5c18221aa29c131700cd8bbf2e4d2df4a0900df1573644bef202c3963"} Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.577575 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6vfgk" event={"ID":"c3f16040-f11b-405c-b332-7ee5eabac2bd","Type":"ContainerStarted","Data":"8edd72724fbfa6a5ab4862be79adf59b48ecf77b5231cde40d6da38b875606d9"} Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.586053 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a","Type":"ContainerStarted","Data":"7501ac43739d89a52c67369e86cd763ac003cab29148cd582786f315d5f67f7d"} Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.587976 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9qps6" event={"ID":"02ab5d98-13ab-483d-b32b-a509bedd8ded","Type":"ContainerStarted","Data":"9d4bba6c5dea706ecdf2c0782d47f651d5b91f7308f0cdfbfcc2963e9b339b4c"} Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.592260 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-66zrm" 
event={"ID":"6128fe0e-47ba-405d-b527-38b43d9d262c","Type":"ContainerStarted","Data":"4cd0a41878f60274eeadba76846329056969aead85c7097e759f1e98d257f882"} Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.593582 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-66zrm" Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.599160 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ac71caa8-2f63-4b64-8d37-a1b364b62158","Type":"ContainerStarted","Data":"346d30390f531bf6c939aad4961ac7e1fddab02de68429fbf9e28d9115cae162"} Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.600899 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-wfl7r" Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.600905 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-wfl7r" event={"ID":"c140535a-737a-4dab-9b80-501657ca3921","Type":"ContainerDied","Data":"33c83437c62e28e07f8d15d000f0ed8a382b0aef0b8f5d80642fcfa7ddea9b58"} Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.602910 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"569b1ad4-179c-4852-a5fc-509fe31df812","Type":"ContainerStarted","Data":"3fa590fb24ae633e89aa68a30b90975359491a6273bb043c595ef7543075144e"} Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.608905 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-cb8zt"] Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.608939 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"35f1a99d-7cdf-41d2-8106-e18f5660eb1b","Type":"ContainerStarted","Data":"ca0801814ddccc3eab733703a5a47e6c988760014e8a9df2c3621948ce0a44cf"} Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.616167 5014 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-cb8zt"] Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.619908 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-66zrm" podStartSLOduration=2.405110806 podStartE2EDuration="14.619888217s" podCreationTimestamp="2026-02-28 04:51:36 +0000 UTC" firstStartedPulling="2026-02-28 04:51:37.013467058 +0000 UTC m=+1085.683592968" lastFinishedPulling="2026-02-28 04:51:49.228244469 +0000 UTC m=+1097.898370379" observedRunningTime="2026-02-28 04:51:50.614480001 +0000 UTC m=+1099.284605911" watchObservedRunningTime="2026-02-28 04:51:50.619888217 +0000 UTC m=+1099.290014127" Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.666064 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-wfl7r"] Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.671877 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-wfl7r"] Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.920479 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-mgzdl"] Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.921394 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-mgzdl" Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.923281 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 28 04:51:50 crc kubenswrapper[5014]: I0228 04:51:50.952657 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-mgzdl"] Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.056000 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-66zrm"] Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.082655 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43eb6c14-8ca4-41ba-9ee2-7326edcab237-combined-ca-bundle\") pod \"ovn-controller-metrics-mgzdl\" (UID: \"43eb6c14-8ca4-41ba-9ee2-7326edcab237\") " pod="openstack/ovn-controller-metrics-mgzdl" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.082758 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43eb6c14-8ca4-41ba-9ee2-7326edcab237-config\") pod \"ovn-controller-metrics-mgzdl\" (UID: \"43eb6c14-8ca4-41ba-9ee2-7326edcab237\") " pod="openstack/ovn-controller-metrics-mgzdl" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.082789 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/43eb6c14-8ca4-41ba-9ee2-7326edcab237-ovs-rundir\") pod \"ovn-controller-metrics-mgzdl\" (UID: \"43eb6c14-8ca4-41ba-9ee2-7326edcab237\") " pod="openstack/ovn-controller-metrics-mgzdl" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.082842 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/host-path/43eb6c14-8ca4-41ba-9ee2-7326edcab237-ovn-rundir\") pod \"ovn-controller-metrics-mgzdl\" (UID: \"43eb6c14-8ca4-41ba-9ee2-7326edcab237\") " pod="openstack/ovn-controller-metrics-mgzdl" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.082864 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddcf8\" (UniqueName: \"kubernetes.io/projected/43eb6c14-8ca4-41ba-9ee2-7326edcab237-kube-api-access-ddcf8\") pod \"ovn-controller-metrics-mgzdl\" (UID: \"43eb6c14-8ca4-41ba-9ee2-7326edcab237\") " pod="openstack/ovn-controller-metrics-mgzdl" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.082914 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/43eb6c14-8ca4-41ba-9ee2-7326edcab237-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mgzdl\" (UID: \"43eb6c14-8ca4-41ba-9ee2-7326edcab237\") " pod="openstack/ovn-controller-metrics-mgzdl" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.103881 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-hsmzr"] Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.105930 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-hsmzr" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.109187 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.115359 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-hsmzr"] Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.192633 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8qjm\" (UniqueName: \"kubernetes.io/projected/0fe0cfd6-ec18-4221-8722-8be777814e26-kube-api-access-b8qjm\") pod \"dnsmasq-dns-7f896c8c65-hsmzr\" (UID: \"0fe0cfd6-ec18-4221-8722-8be777814e26\") " pod="openstack/dnsmasq-dns-7f896c8c65-hsmzr" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.192684 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fe0cfd6-ec18-4221-8722-8be777814e26-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-hsmzr\" (UID: \"0fe0cfd6-ec18-4221-8722-8be777814e26\") " pod="openstack/dnsmasq-dns-7f896c8c65-hsmzr" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.192707 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/43eb6c14-8ca4-41ba-9ee2-7326edcab237-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mgzdl\" (UID: \"43eb6c14-8ca4-41ba-9ee2-7326edcab237\") " pod="openstack/ovn-controller-metrics-mgzdl" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.192787 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43eb6c14-8ca4-41ba-9ee2-7326edcab237-combined-ca-bundle\") pod \"ovn-controller-metrics-mgzdl\" (UID: \"43eb6c14-8ca4-41ba-9ee2-7326edcab237\") " 
pod="openstack/ovn-controller-metrics-mgzdl" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.192819 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fe0cfd6-ec18-4221-8722-8be777814e26-config\") pod \"dnsmasq-dns-7f896c8c65-hsmzr\" (UID: \"0fe0cfd6-ec18-4221-8722-8be777814e26\") " pod="openstack/dnsmasq-dns-7f896c8c65-hsmzr" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.192847 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fe0cfd6-ec18-4221-8722-8be777814e26-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-hsmzr\" (UID: \"0fe0cfd6-ec18-4221-8722-8be777814e26\") " pod="openstack/dnsmasq-dns-7f896c8c65-hsmzr" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.192884 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43eb6c14-8ca4-41ba-9ee2-7326edcab237-config\") pod \"ovn-controller-metrics-mgzdl\" (UID: \"43eb6c14-8ca4-41ba-9ee2-7326edcab237\") " pod="openstack/ovn-controller-metrics-mgzdl" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.192900 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/43eb6c14-8ca4-41ba-9ee2-7326edcab237-ovs-rundir\") pod \"ovn-controller-metrics-mgzdl\" (UID: \"43eb6c14-8ca4-41ba-9ee2-7326edcab237\") " pod="openstack/ovn-controller-metrics-mgzdl" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.192922 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/43eb6c14-8ca4-41ba-9ee2-7326edcab237-ovn-rundir\") pod \"ovn-controller-metrics-mgzdl\" (UID: \"43eb6c14-8ca4-41ba-9ee2-7326edcab237\") " pod="openstack/ovn-controller-metrics-mgzdl" Feb 28 04:51:51 crc 
kubenswrapper[5014]: I0228 04:51:51.192939 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddcf8\" (UniqueName: \"kubernetes.io/projected/43eb6c14-8ca4-41ba-9ee2-7326edcab237-kube-api-access-ddcf8\") pod \"ovn-controller-metrics-mgzdl\" (UID: \"43eb6c14-8ca4-41ba-9ee2-7326edcab237\") " pod="openstack/ovn-controller-metrics-mgzdl" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.208014 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/43eb6c14-8ca4-41ba-9ee2-7326edcab237-ovs-rundir\") pod \"ovn-controller-metrics-mgzdl\" (UID: \"43eb6c14-8ca4-41ba-9ee2-7326edcab237\") " pod="openstack/ovn-controller-metrics-mgzdl" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.208127 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/43eb6c14-8ca4-41ba-9ee2-7326edcab237-ovn-rundir\") pod \"ovn-controller-metrics-mgzdl\" (UID: \"43eb6c14-8ca4-41ba-9ee2-7326edcab237\") " pod="openstack/ovn-controller-metrics-mgzdl" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.212055 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/43eb6c14-8ca4-41ba-9ee2-7326edcab237-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-mgzdl\" (UID: \"43eb6c14-8ca4-41ba-9ee2-7326edcab237\") " pod="openstack/ovn-controller-metrics-mgzdl" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.220215 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43eb6c14-8ca4-41ba-9ee2-7326edcab237-config\") pod \"ovn-controller-metrics-mgzdl\" (UID: \"43eb6c14-8ca4-41ba-9ee2-7326edcab237\") " pod="openstack/ovn-controller-metrics-mgzdl" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.224541 5014 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-gn8d5"] Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.226760 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43eb6c14-8ca4-41ba-9ee2-7326edcab237-combined-ca-bundle\") pod \"ovn-controller-metrics-mgzdl\" (UID: \"43eb6c14-8ca4-41ba-9ee2-7326edcab237\") " pod="openstack/ovn-controller-metrics-mgzdl" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.232519 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddcf8\" (UniqueName: \"kubernetes.io/projected/43eb6c14-8ca4-41ba-9ee2-7326edcab237-kube-api-access-ddcf8\") pod \"ovn-controller-metrics-mgzdl\" (UID: \"43eb6c14-8ca4-41ba-9ee2-7326edcab237\") " pod="openstack/ovn-controller-metrics-mgzdl" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.246069 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-mgzdl" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.251108 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-h9blq"] Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.256541 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-h9blq" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.261375 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.277704 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-h9blq"] Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.294302 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fe0cfd6-ec18-4221-8722-8be777814e26-config\") pod \"dnsmasq-dns-7f896c8c65-hsmzr\" (UID: \"0fe0cfd6-ec18-4221-8722-8be777814e26\") " pod="openstack/dnsmasq-dns-7f896c8c65-hsmzr" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.294348 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fe0cfd6-ec18-4221-8722-8be777814e26-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-hsmzr\" (UID: \"0fe0cfd6-ec18-4221-8722-8be777814e26\") " pod="openstack/dnsmasq-dns-7f896c8c65-hsmzr" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.294419 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8qjm\" (UniqueName: \"kubernetes.io/projected/0fe0cfd6-ec18-4221-8722-8be777814e26-kube-api-access-b8qjm\") pod \"dnsmasq-dns-7f896c8c65-hsmzr\" (UID: \"0fe0cfd6-ec18-4221-8722-8be777814e26\") " pod="openstack/dnsmasq-dns-7f896c8c65-hsmzr" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.294435 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fe0cfd6-ec18-4221-8722-8be777814e26-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-hsmzr\" (UID: \"0fe0cfd6-ec18-4221-8722-8be777814e26\") " pod="openstack/dnsmasq-dns-7f896c8c65-hsmzr" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.295224 
5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fe0cfd6-ec18-4221-8722-8be777814e26-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-hsmzr\" (UID: \"0fe0cfd6-ec18-4221-8722-8be777814e26\") " pod="openstack/dnsmasq-dns-7f896c8c65-hsmzr" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.295719 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fe0cfd6-ec18-4221-8722-8be777814e26-config\") pod \"dnsmasq-dns-7f896c8c65-hsmzr\" (UID: \"0fe0cfd6-ec18-4221-8722-8be777814e26\") " pod="openstack/dnsmasq-dns-7f896c8c65-hsmzr" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.296281 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fe0cfd6-ec18-4221-8722-8be777814e26-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-hsmzr\" (UID: \"0fe0cfd6-ec18-4221-8722-8be777814e26\") " pod="openstack/dnsmasq-dns-7f896c8c65-hsmzr" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.333153 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8qjm\" (UniqueName: \"kubernetes.io/projected/0fe0cfd6-ec18-4221-8722-8be777814e26-kube-api-access-b8qjm\") pod \"dnsmasq-dns-7f896c8c65-hsmzr\" (UID: \"0fe0cfd6-ec18-4221-8722-8be777814e26\") " pod="openstack/dnsmasq-dns-7f896c8c65-hsmzr" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.395976 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bfcf6cb-666f-44c2-885b-f916a1e81b8f-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-h9blq\" (UID: \"6bfcf6cb-666f-44c2-885b-f916a1e81b8f\") " pod="openstack/dnsmasq-dns-86db49b7ff-h9blq" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.396150 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bfcf6cb-666f-44c2-885b-f916a1e81b8f-config\") pod \"dnsmasq-dns-86db49b7ff-h9blq\" (UID: \"6bfcf6cb-666f-44c2-885b-f916a1e81b8f\") " pod="openstack/dnsmasq-dns-86db49b7ff-h9blq" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.396210 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bfcf6cb-666f-44c2-885b-f916a1e81b8f-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-h9blq\" (UID: \"6bfcf6cb-666f-44c2-885b-f916a1e81b8f\") " pod="openstack/dnsmasq-dns-86db49b7ff-h9blq" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.396237 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smzbc\" (UniqueName: \"kubernetes.io/projected/6bfcf6cb-666f-44c2-885b-f916a1e81b8f-kube-api-access-smzbc\") pod \"dnsmasq-dns-86db49b7ff-h9blq\" (UID: \"6bfcf6cb-666f-44c2-885b-f916a1e81b8f\") " pod="openstack/dnsmasq-dns-86db49b7ff-h9blq" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.396385 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bfcf6cb-666f-44c2-885b-f916a1e81b8f-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-h9blq\" (UID: \"6bfcf6cb-666f-44c2-885b-f916a1e81b8f\") " pod="openstack/dnsmasq-dns-86db49b7ff-h9blq" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.451085 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-hsmzr" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.498367 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bfcf6cb-666f-44c2-885b-f916a1e81b8f-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-h9blq\" (UID: \"6bfcf6cb-666f-44c2-885b-f916a1e81b8f\") " pod="openstack/dnsmasq-dns-86db49b7ff-h9blq" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.498417 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smzbc\" (UniqueName: \"kubernetes.io/projected/6bfcf6cb-666f-44c2-885b-f916a1e81b8f-kube-api-access-smzbc\") pod \"dnsmasq-dns-86db49b7ff-h9blq\" (UID: \"6bfcf6cb-666f-44c2-885b-f916a1e81b8f\") " pod="openstack/dnsmasq-dns-86db49b7ff-h9blq" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.498469 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bfcf6cb-666f-44c2-885b-f916a1e81b8f-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-h9blq\" (UID: \"6bfcf6cb-666f-44c2-885b-f916a1e81b8f\") " pod="openstack/dnsmasq-dns-86db49b7ff-h9blq" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.498518 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bfcf6cb-666f-44c2-885b-f916a1e81b8f-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-h9blq\" (UID: \"6bfcf6cb-666f-44c2-885b-f916a1e81b8f\") " pod="openstack/dnsmasq-dns-86db49b7ff-h9blq" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.498567 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bfcf6cb-666f-44c2-885b-f916a1e81b8f-config\") pod \"dnsmasq-dns-86db49b7ff-h9blq\" (UID: \"6bfcf6cb-666f-44c2-885b-f916a1e81b8f\") " pod="openstack/dnsmasq-dns-86db49b7ff-h9blq" 
Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.499428 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bfcf6cb-666f-44c2-885b-f916a1e81b8f-config\") pod \"dnsmasq-dns-86db49b7ff-h9blq\" (UID: \"6bfcf6cb-666f-44c2-885b-f916a1e81b8f\") " pod="openstack/dnsmasq-dns-86db49b7ff-h9blq" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.499510 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bfcf6cb-666f-44c2-885b-f916a1e81b8f-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-h9blq\" (UID: \"6bfcf6cb-666f-44c2-885b-f916a1e81b8f\") " pod="openstack/dnsmasq-dns-86db49b7ff-h9blq" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.499877 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bfcf6cb-666f-44c2-885b-f916a1e81b8f-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-h9blq\" (UID: \"6bfcf6cb-666f-44c2-885b-f916a1e81b8f\") " pod="openstack/dnsmasq-dns-86db49b7ff-h9blq" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.500202 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bfcf6cb-666f-44c2-885b-f916a1e81b8f-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-h9blq\" (UID: \"6bfcf6cb-666f-44c2-885b-f916a1e81b8f\") " pod="openstack/dnsmasq-dns-86db49b7ff-h9blq" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.520578 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smzbc\" (UniqueName: \"kubernetes.io/projected/6bfcf6cb-666f-44c2-885b-f916a1e81b8f-kube-api-access-smzbc\") pod \"dnsmasq-dns-86db49b7ff-h9blq\" (UID: \"6bfcf6cb-666f-44c2-885b-f916a1e81b8f\") " pod="openstack/dnsmasq-dns-86db49b7ff-h9blq" Feb 28 04:51:51 crc kubenswrapper[5014]: I0228 04:51:51.599639 5014 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-h9blq" Feb 28 04:51:52 crc kubenswrapper[5014]: I0228 04:51:52.184654 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f7d64ae-72df-4502-91fc-2c9de87ee05f" path="/var/lib/kubelet/pods/9f7d64ae-72df-4502-91fc-2c9de87ee05f/volumes" Feb 28 04:51:52 crc kubenswrapper[5014]: I0228 04:51:52.185466 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c140535a-737a-4dab-9b80-501657ca3921" path="/var/lib/kubelet/pods/c140535a-737a-4dab-9b80-501657ca3921/volumes" Feb 28 04:51:52 crc kubenswrapper[5014]: I0228 04:51:52.625076 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-66zrm" podUID="6128fe0e-47ba-405d-b527-38b43d9d262c" containerName="dnsmasq-dns" containerID="cri-o://4cd0a41878f60274eeadba76846329056969aead85c7097e759f1e98d257f882" gracePeriod=10 Feb 28 04:51:53 crc kubenswrapper[5014]: I0228 04:51:53.635292 5014 generic.go:334] "Generic (PLEG): container finished" podID="6128fe0e-47ba-405d-b527-38b43d9d262c" containerID="4cd0a41878f60274eeadba76846329056969aead85c7097e759f1e98d257f882" exitCode=0 Feb 28 04:51:53 crc kubenswrapper[5014]: I0228 04:51:53.635345 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-66zrm" event={"ID":"6128fe0e-47ba-405d-b527-38b43d9d262c","Type":"ContainerDied","Data":"4cd0a41878f60274eeadba76846329056969aead85c7097e759f1e98d257f882"} Feb 28 04:51:57 crc kubenswrapper[5014]: I0228 04:51:57.336847 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-66zrm" Feb 28 04:51:57 crc kubenswrapper[5014]: I0228 04:51:57.393004 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-969tz\" (UniqueName: \"kubernetes.io/projected/6128fe0e-47ba-405d-b527-38b43d9d262c-kube-api-access-969tz\") pod \"6128fe0e-47ba-405d-b527-38b43d9d262c\" (UID: \"6128fe0e-47ba-405d-b527-38b43d9d262c\") " Feb 28 04:51:57 crc kubenswrapper[5014]: I0228 04:51:57.393413 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6128fe0e-47ba-405d-b527-38b43d9d262c-dns-svc\") pod \"6128fe0e-47ba-405d-b527-38b43d9d262c\" (UID: \"6128fe0e-47ba-405d-b527-38b43d9d262c\") " Feb 28 04:51:57 crc kubenswrapper[5014]: I0228 04:51:57.393624 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6128fe0e-47ba-405d-b527-38b43d9d262c-config\") pod \"6128fe0e-47ba-405d-b527-38b43d9d262c\" (UID: \"6128fe0e-47ba-405d-b527-38b43d9d262c\") " Feb 28 04:51:57 crc kubenswrapper[5014]: I0228 04:51:57.397748 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6128fe0e-47ba-405d-b527-38b43d9d262c-kube-api-access-969tz" (OuterVolumeSpecName: "kube-api-access-969tz") pod "6128fe0e-47ba-405d-b527-38b43d9d262c" (UID: "6128fe0e-47ba-405d-b527-38b43d9d262c"). InnerVolumeSpecName "kube-api-access-969tz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:51:57 crc kubenswrapper[5014]: I0228 04:51:57.430157 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6128fe0e-47ba-405d-b527-38b43d9d262c-config" (OuterVolumeSpecName: "config") pod "6128fe0e-47ba-405d-b527-38b43d9d262c" (UID: "6128fe0e-47ba-405d-b527-38b43d9d262c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:51:57 crc kubenswrapper[5014]: I0228 04:51:57.443066 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6128fe0e-47ba-405d-b527-38b43d9d262c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6128fe0e-47ba-405d-b527-38b43d9d262c" (UID: "6128fe0e-47ba-405d-b527-38b43d9d262c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:51:57 crc kubenswrapper[5014]: I0228 04:51:57.500056 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6128fe0e-47ba-405d-b527-38b43d9d262c-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:51:57 crc kubenswrapper[5014]: I0228 04:51:57.500110 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-969tz\" (UniqueName: \"kubernetes.io/projected/6128fe0e-47ba-405d-b527-38b43d9d262c-kube-api-access-969tz\") on node \"crc\" DevicePath \"\"" Feb 28 04:51:57 crc kubenswrapper[5014]: I0228 04:51:57.500128 5014 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6128fe0e-47ba-405d-b527-38b43d9d262c-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 04:51:57 crc kubenswrapper[5014]: I0228 04:51:57.676692 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-66zrm" event={"ID":"6128fe0e-47ba-405d-b527-38b43d9d262c","Type":"ContainerDied","Data":"d044c9d9c7f8b6e2edec0b6318ba8a150ee53a9c53e68088a6a0387f07eeffc3"} Feb 28 04:51:57 crc kubenswrapper[5014]: I0228 04:51:57.676767 5014 scope.go:117] "RemoveContainer" containerID="4cd0a41878f60274eeadba76846329056969aead85c7097e759f1e98d257f882" Feb 28 04:51:57 crc kubenswrapper[5014]: I0228 04:51:57.676773 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-66zrm" Feb 28 04:51:57 crc kubenswrapper[5014]: I0228 04:51:57.718265 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-66zrm"] Feb 28 04:51:57 crc kubenswrapper[5014]: I0228 04:51:57.727716 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-66zrm"] Feb 28 04:51:57 crc kubenswrapper[5014]: I0228 04:51:57.980566 5014 scope.go:117] "RemoveContainer" containerID="071e3fbf996ece5a3dc92746349f4f9cd50a0c6675642707e8e824dffa2be173" Feb 28 04:51:58 crc kubenswrapper[5014]: I0228 04:51:58.061398 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-mgzdl"] Feb 28 04:51:58 crc kubenswrapper[5014]: I0228 04:51:58.164781 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-hsmzr"] Feb 28 04:51:58 crc kubenswrapper[5014]: I0228 04:51:58.181005 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6128fe0e-47ba-405d-b527-38b43d9d262c" path="/var/lib/kubelet/pods/6128fe0e-47ba-405d-b527-38b43d9d262c/volumes" Feb 28 04:51:58 crc kubenswrapper[5014]: I0228 04:51:58.227324 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-h9blq"] Feb 28 04:51:58 crc kubenswrapper[5014]: W0228 04:51:58.450348 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43eb6c14_8ca4_41ba_9ee2_7326edcab237.slice/crio-baaba6b6c08cd4c29b27777523fa74ce51298719ca927b918c68c8bd500ecb20 WatchSource:0}: Error finding container baaba6b6c08cd4c29b27777523fa74ce51298719ca927b918c68c8bd500ecb20: Status 404 returned error can't find the container with id baaba6b6c08cd4c29b27777523fa74ce51298719ca927b918c68c8bd500ecb20 Feb 28 04:51:58 crc kubenswrapper[5014]: W0228 04:51:58.464623 5014 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6bfcf6cb_666f_44c2_885b_f916a1e81b8f.slice/crio-5333c16832b171490af02df9a6c507adc8f7e1b74465366d29d500f9463b2cba WatchSource:0}: Error finding container 5333c16832b171490af02df9a6c507adc8f7e1b74465366d29d500f9463b2cba: Status 404 returned error can't find the container with id 5333c16832b171490af02df9a6c507adc8f7e1b74465366d29d500f9463b2cba Feb 28 04:51:58 crc kubenswrapper[5014]: I0228 04:51:58.688099 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-hsmzr" event={"ID":"0fe0cfd6-ec18-4221-8722-8be777814e26","Type":"ContainerStarted","Data":"85ed0c890151727149c158feb460fe8f1416c5f345cf1c807d2a2b5cd4e1c1b4"} Feb 28 04:51:58 crc kubenswrapper[5014]: I0228 04:51:58.689268 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-mgzdl" event={"ID":"43eb6c14-8ca4-41ba-9ee2-7326edcab237","Type":"ContainerStarted","Data":"baaba6b6c08cd4c29b27777523fa74ce51298719ca927b918c68c8bd500ecb20"} Feb 28 04:51:58 crc kubenswrapper[5014]: I0228 04:51:58.691110 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-gn8d5" event={"ID":"6f39bd6f-3b06-4fbb-886d-b96e77209f53","Type":"ContainerStarted","Data":"8a9c6a52151a3072d41f884b74b1f1ba2df8bfe6a0f566841948e5b37af94750"} Feb 28 04:51:58 crc kubenswrapper[5014]: I0228 04:51:58.691232 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc8479f9-gn8d5" podUID="6f39bd6f-3b06-4fbb-886d-b96e77209f53" containerName="dnsmasq-dns" containerID="cri-o://8a9c6a52151a3072d41f884b74b1f1ba2df8bfe6a0f566841948e5b37af94750" gracePeriod=10 Feb 28 04:51:58 crc kubenswrapper[5014]: I0228 04:51:58.691455 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc8479f9-gn8d5" Feb 28 04:51:58 crc kubenswrapper[5014]: I0228 04:51:58.702796 5014 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-h9blq" event={"ID":"6bfcf6cb-666f-44c2-885b-f916a1e81b8f","Type":"ContainerStarted","Data":"5333c16832b171490af02df9a6c507adc8f7e1b74465366d29d500f9463b2cba"} Feb 28 04:51:58 crc kubenswrapper[5014]: I0228 04:51:58.709581 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc8479f9-gn8d5" podStartSLOduration=10.452637147 podStartE2EDuration="22.709565245s" podCreationTimestamp="2026-02-28 04:51:36 +0000 UTC" firstStartedPulling="2026-02-28 04:51:36.973219082 +0000 UTC m=+1085.643344992" lastFinishedPulling="2026-02-28 04:51:49.23014718 +0000 UTC m=+1097.900273090" observedRunningTime="2026-02-28 04:51:58.709271188 +0000 UTC m=+1107.379397098" watchObservedRunningTime="2026-02-28 04:51:58.709565245 +0000 UTC m=+1107.379691155" Feb 28 04:51:59 crc kubenswrapper[5014]: I0228 04:51:59.733743 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"1420f298-151a-48af-bdb2-a58d5143967c","Type":"ContainerStarted","Data":"b265106ff434ab1e6f6d7167d1c798dcda5ff0c751f387a71d367b86884cc173"} Feb 28 04:51:59 crc kubenswrapper[5014]: I0228 04:51:59.734144 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 28 04:51:59 crc kubenswrapper[5014]: I0228 04:51:59.739351 5014 generic.go:334] "Generic (PLEG): container finished" podID="6f39bd6f-3b06-4fbb-886d-b96e77209f53" containerID="8a9c6a52151a3072d41f884b74b1f1ba2df8bfe6a0f566841948e5b37af94750" exitCode=0 Feb 28 04:51:59 crc kubenswrapper[5014]: I0228 04:51:59.739528 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-gn8d5" event={"ID":"6f39bd6f-3b06-4fbb-886d-b96e77209f53","Type":"ContainerDied","Data":"8a9c6a52151a3072d41f884b74b1f1ba2df8bfe6a0f566841948e5b37af94750"} Feb 28 04:51:59 crc kubenswrapper[5014]: I0228 04:51:59.739741 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5ccc8479f9-gn8d5" event={"ID":"6f39bd6f-3b06-4fbb-886d-b96e77209f53","Type":"ContainerDied","Data":"ce5befd8915dd191ead47241abc9ec6768015873fb0f92d256929f1d71a00e22"} Feb 28 04:51:59 crc kubenswrapper[5014]: I0228 04:51:59.739883 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce5befd8915dd191ead47241abc9ec6768015873fb0f92d256929f1d71a00e22" Feb 28 04:51:59 crc kubenswrapper[5014]: I0228 04:51:59.743493 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c1c70607-6183-4835-9ce6-fe3ef0d2b6fb","Type":"ContainerStarted","Data":"efe4a02cfc23275439496a991dd41ea99b1666f3a2bb408b1a3771280c7e5bb6"} Feb 28 04:51:59 crc kubenswrapper[5014]: I0228 04:51:59.747575 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"5a44d0e3-2ba4-4d6f-924b-1f516c90a11f","Type":"ContainerStarted","Data":"e49bfa7d2fbcd9a59a3636e2d84522a075823cfeac76405d5a8615795b2cda3f"} Feb 28 04:51:59 crc kubenswrapper[5014]: I0228 04:51:59.755934 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ac71caa8-2f63-4b64-8d37-a1b364b62158","Type":"ContainerStarted","Data":"4ff81fad405af183be13f59f3f3f381894b6c5fd0a062f8a5f4987e418c15fd4"} Feb 28 04:51:59 crc kubenswrapper[5014]: I0228 04:51:59.761550 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=11.873194713 podStartE2EDuration="19.761529266s" podCreationTimestamp="2026-02-28 04:51:40 +0000 UTC" firstStartedPulling="2026-02-28 04:51:49.359161981 +0000 UTC m=+1098.029287891" lastFinishedPulling="2026-02-28 04:51:57.247496534 +0000 UTC m=+1105.917622444" observedRunningTime="2026-02-28 04:51:59.757240161 +0000 UTC m=+1108.427366081" watchObservedRunningTime="2026-02-28 04:51:59.761529266 +0000 UTC m=+1108.431655176" Feb 28 04:51:59 crc kubenswrapper[5014]: I0228 
04:51:59.767559 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6vfgk" event={"ID":"c3f16040-f11b-405c-b332-7ee5eabac2bd","Type":"ContainerStarted","Data":"b0c4b9eb4e76c36fdcdc20b80b9e710838f98a1fcaf083adfcd4de2bd9a235d6"} Feb 28 04:51:59 crc kubenswrapper[5014]: I0228 04:51:59.872646 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-gn8d5" Feb 28 04:51:59 crc kubenswrapper[5014]: I0228 04:51:59.951294 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f39bd6f-3b06-4fbb-886d-b96e77209f53-dns-svc\") pod \"6f39bd6f-3b06-4fbb-886d-b96e77209f53\" (UID: \"6f39bd6f-3b06-4fbb-886d-b96e77209f53\") " Feb 28 04:51:59 crc kubenswrapper[5014]: I0228 04:51:59.951341 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f39bd6f-3b06-4fbb-886d-b96e77209f53-config\") pod \"6f39bd6f-3b06-4fbb-886d-b96e77209f53\" (UID: \"6f39bd6f-3b06-4fbb-886d-b96e77209f53\") " Feb 28 04:51:59 crc kubenswrapper[5014]: I0228 04:51:59.951383 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79gjl\" (UniqueName: \"kubernetes.io/projected/6f39bd6f-3b06-4fbb-886d-b96e77209f53-kube-api-access-79gjl\") pod \"6f39bd6f-3b06-4fbb-886d-b96e77209f53\" (UID: \"6f39bd6f-3b06-4fbb-886d-b96e77209f53\") " Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 04:52:00.015086 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f39bd6f-3b06-4fbb-886d-b96e77209f53-kube-api-access-79gjl" (OuterVolumeSpecName: "kube-api-access-79gjl") pod "6f39bd6f-3b06-4fbb-886d-b96e77209f53" (UID: "6f39bd6f-3b06-4fbb-886d-b96e77209f53"). InnerVolumeSpecName "kube-api-access-79gjl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 04:52:00.053759 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79gjl\" (UniqueName: \"kubernetes.io/projected/6f39bd6f-3b06-4fbb-886d-b96e77209f53-kube-api-access-79gjl\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 04:52:00.095381 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f39bd6f-3b06-4fbb-886d-b96e77209f53-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6f39bd6f-3b06-4fbb-886d-b96e77209f53" (UID: "6f39bd6f-3b06-4fbb-886d-b96e77209f53"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 04:52:00.098072 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f39bd6f-3b06-4fbb-886d-b96e77209f53-config" (OuterVolumeSpecName: "config") pod "6f39bd6f-3b06-4fbb-886d-b96e77209f53" (UID: "6f39bd6f-3b06-4fbb-886d-b96e77209f53"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 04:52:00.139743 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537572-krzx2"] Feb 28 04:52:00 crc kubenswrapper[5014]: E0228 04:52:00.140091 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f39bd6f-3b06-4fbb-886d-b96e77209f53" containerName="dnsmasq-dns" Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 04:52:00.140108 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f39bd6f-3b06-4fbb-886d-b96e77209f53" containerName="dnsmasq-dns" Feb 28 04:52:00 crc kubenswrapper[5014]: E0228 04:52:00.140121 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f39bd6f-3b06-4fbb-886d-b96e77209f53" containerName="init" Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 04:52:00.140128 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f39bd6f-3b06-4fbb-886d-b96e77209f53" containerName="init" Feb 28 04:52:00 crc kubenswrapper[5014]: E0228 04:52:00.140142 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6128fe0e-47ba-405d-b527-38b43d9d262c" containerName="dnsmasq-dns" Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 04:52:00.140148 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="6128fe0e-47ba-405d-b527-38b43d9d262c" containerName="dnsmasq-dns" Feb 28 04:52:00 crc kubenswrapper[5014]: E0228 04:52:00.140167 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6128fe0e-47ba-405d-b527-38b43d9d262c" containerName="init" Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 04:52:00.140173 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="6128fe0e-47ba-405d-b527-38b43d9d262c" containerName="init" Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 04:52:00.140302 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f39bd6f-3b06-4fbb-886d-b96e77209f53" containerName="dnsmasq-dns" Feb 28 04:52:00 crc kubenswrapper[5014]: 
I0228 04:52:00.140319 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="6128fe0e-47ba-405d-b527-38b43d9d262c" containerName="dnsmasq-dns" Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 04:52:00.140827 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537572-krzx2" Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 04:52:00.142799 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 04:52:00.142976 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-r6pdk" Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 04:52:00.143678 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 04:52:00.143762 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537572-krzx2"] Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 04:52:00.154695 5014 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f39bd6f-3b06-4fbb-886d-b96e77209f53-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 04:52:00.154735 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f39bd6f-3b06-4fbb-886d-b96e77209f53-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 04:52:00.257660 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n2pj\" (UniqueName: \"kubernetes.io/projected/46285c3a-9d55-4bc5-8b40-8413ca3e8a4e-kube-api-access-9n2pj\") pod \"auto-csr-approver-29537572-krzx2\" (UID: \"46285c3a-9d55-4bc5-8b40-8413ca3e8a4e\") " pod="openshift-infra/auto-csr-approver-29537572-krzx2" 
Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 04:52:00.358710 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n2pj\" (UniqueName: \"kubernetes.io/projected/46285c3a-9d55-4bc5-8b40-8413ca3e8a4e-kube-api-access-9n2pj\") pod \"auto-csr-approver-29537572-krzx2\" (UID: \"46285c3a-9d55-4bc5-8b40-8413ca3e8a4e\") " pod="openshift-infra/auto-csr-approver-29537572-krzx2" Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 04:52:00.375711 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n2pj\" (UniqueName: \"kubernetes.io/projected/46285c3a-9d55-4bc5-8b40-8413ca3e8a4e-kube-api-access-9n2pj\") pod \"auto-csr-approver-29537572-krzx2\" (UID: \"46285c3a-9d55-4bc5-8b40-8413ca3e8a4e\") " pod="openshift-infra/auto-csr-approver-29537572-krzx2" Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 04:52:00.574782 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537572-krzx2" Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 04:52:00.777256 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"35f1a99d-7cdf-41d2-8106-e18f5660eb1b","Type":"ContainerStarted","Data":"5bdf8ea7a06cf7abbed98ff2393b40a3dfc8611d3494f2dd07d07b7560fb5a46"} Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 04:52:00.777440 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 04:52:00.779193 5014 generic.go:334] "Generic (PLEG): container finished" podID="0fe0cfd6-ec18-4221-8722-8be777814e26" containerID="7c096662c9368c4e7ebbde93f9252fb0af118b7ecf7245d71c7fbe97229d2e58" exitCode=0 Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 04:52:00.779263 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-hsmzr" 
event={"ID":"0fe0cfd6-ec18-4221-8722-8be777814e26","Type":"ContainerDied","Data":"7c096662c9368c4e7ebbde93f9252fb0af118b7ecf7245d71c7fbe97229d2e58"} Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 04:52:00.781600 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a","Type":"ContainerStarted","Data":"0d1061f7a0ea20558bdded3c641a52419a84163e5db3bf1d2a4fd9e2cd9544e7"} Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 04:52:00.783657 5014 generic.go:334] "Generic (PLEG): container finished" podID="6bfcf6cb-666f-44c2-885b-f916a1e81b8f" containerID="e0f66f1f5a0a23fb2e474489605f990c5d7f13b76b4f764250127d8395ebb8da" exitCode=0 Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 04:52:00.783717 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-h9blq" event={"ID":"6bfcf6cb-666f-44c2-885b-f916a1e81b8f","Type":"ContainerDied","Data":"e0f66f1f5a0a23fb2e474489605f990c5d7f13b76b4f764250127d8395ebb8da"} Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 04:52:00.786254 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9qps6" event={"ID":"02ab5d98-13ab-483d-b32b-a509bedd8ded","Type":"ContainerStarted","Data":"bd98a42f57fb48e5d7281064b00ff86c78d2ff0d51d255b1183f74ebf9af4682"} Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 04:52:00.787518 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"351fb773-0669-41c0-aee8-0469f34d64c9","Type":"ContainerStarted","Data":"6aa052f4b5e7c5a3ad8de9ccf2eb6301e3f49de02844097a1f59be13fb678de0"} Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 04:52:00.792963 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"569b1ad4-179c-4852-a5fc-509fe31df812","Type":"ContainerStarted","Data":"62a26ec650da5de1e93c6319b5b7a729e91cbe415949aada719dc3666a0b1709"} Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 
04:52:00.814992 5014 generic.go:334] "Generic (PLEG): container finished" podID="c3f16040-f11b-405c-b332-7ee5eabac2bd" containerID="b0c4b9eb4e76c36fdcdc20b80b9e710838f98a1fcaf083adfcd4de2bd9a235d6" exitCode=0 Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 04:52:00.816366 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6vfgk" event={"ID":"c3f16040-f11b-405c-b332-7ee5eabac2bd","Type":"ContainerDied","Data":"b0c4b9eb4e76c36fdcdc20b80b9e710838f98a1fcaf083adfcd4de2bd9a235d6"} Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 04:52:00.816887 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-gn8d5" Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 04:52:00.834515 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=9.992485416 podStartE2EDuration="18.834496214s" podCreationTimestamp="2026-02-28 04:51:42 +0000 UTC" firstStartedPulling="2026-02-28 04:51:49.7345444 +0000 UTC m=+1098.404670310" lastFinishedPulling="2026-02-28 04:51:58.576555198 +0000 UTC m=+1107.246681108" observedRunningTime="2026-02-28 04:52:00.809369626 +0000 UTC m=+1109.479495536" watchObservedRunningTime="2026-02-28 04:52:00.834496214 +0000 UTC m=+1109.504622124" Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 04:52:00.839289 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-9qps6" podStartSLOduration=8.02519064 podStartE2EDuration="15.839271733s" podCreationTimestamp="2026-02-28 04:51:45 +0000 UTC" firstStartedPulling="2026-02-28 04:51:49.936324054 +0000 UTC m=+1098.606449964" lastFinishedPulling="2026-02-28 04:51:57.750405147 +0000 UTC m=+1106.420531057" observedRunningTime="2026-02-28 04:52:00.827020223 +0000 UTC m=+1109.497146143" watchObservedRunningTime="2026-02-28 04:52:00.839271733 +0000 UTC m=+1109.509397643" Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 
04:52:00.988661 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-gn8d5"] Feb 28 04:52:00 crc kubenswrapper[5014]: I0228 04:52:00.996661 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-gn8d5"] Feb 28 04:52:01 crc kubenswrapper[5014]: I0228 04:52:01.761175 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-57d769cc4f-66zrm" podUID="6128fe0e-47ba-405d-b527-38b43d9d262c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.99:5353: i/o timeout" Feb 28 04:52:01 crc kubenswrapper[5014]: I0228 04:52:01.823094 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-9qps6" Feb 28 04:52:02 crc kubenswrapper[5014]: I0228 04:52:02.210757 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f39bd6f-3b06-4fbb-886d-b96e77209f53" path="/var/lib/kubelet/pods/6f39bd6f-3b06-4fbb-886d-b96e77209f53/volumes" Feb 28 04:52:02 crc kubenswrapper[5014]: I0228 04:52:02.460441 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537572-krzx2"] Feb 28 04:52:02 crc kubenswrapper[5014]: W0228 04:52:02.470544 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46285c3a_9d55_4bc5_8b40_8413ca3e8a4e.slice/crio-940ce313213b68d7014aa69e5f69e1165059c69b8659743f353936158f003584 WatchSource:0}: Error finding container 940ce313213b68d7014aa69e5f69e1165059c69b8659743f353936158f003584: Status 404 returned error can't find the container with id 940ce313213b68d7014aa69e5f69e1165059c69b8659743f353936158f003584 Feb 28 04:52:02 crc kubenswrapper[5014]: I0228 04:52:02.834423 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6vfgk" 
event={"ID":"c3f16040-f11b-405c-b332-7ee5eabac2bd","Type":"ContainerStarted","Data":"7fd3c189f399dcae9caa6266c8dacfeeb34b4e0bf1466e677328e7a1e829542f"} Feb 28 04:52:02 crc kubenswrapper[5014]: I0228 04:52:02.834478 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-6vfgk" event={"ID":"c3f16040-f11b-405c-b332-7ee5eabac2bd","Type":"ContainerStarted","Data":"be5f8daaab61987f5036a64f8dd7f99e78dd7b45ebc074b0a4e5bafee36fbcf6"} Feb 28 04:52:02 crc kubenswrapper[5014]: I0228 04:52:02.834696 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-6vfgk" Feb 28 04:52:02 crc kubenswrapper[5014]: I0228 04:52:02.838489 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-hsmzr" event={"ID":"0fe0cfd6-ec18-4221-8722-8be777814e26","Type":"ContainerStarted","Data":"2be3054cec1ee9c7695500e43eadf17a93851f048687058a242443839feeeb62"} Feb 28 04:52:02 crc kubenswrapper[5014]: I0228 04:52:02.838528 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f896c8c65-hsmzr" Feb 28 04:52:02 crc kubenswrapper[5014]: I0228 04:52:02.840378 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-mgzdl" event={"ID":"43eb6c14-8ca4-41ba-9ee2-7326edcab237","Type":"ContainerStarted","Data":"15b39f8d4f9b3c55f46d10b03c01ea3bb32205bc285a1e4d99902926f7ba1797"} Feb 28 04:52:02 crc kubenswrapper[5014]: I0228 04:52:02.841855 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537572-krzx2" event={"ID":"46285c3a-9d55-4bc5-8b40-8413ca3e8a4e","Type":"ContainerStarted","Data":"940ce313213b68d7014aa69e5f69e1165059c69b8659743f353936158f003584"} Feb 28 04:52:02 crc kubenswrapper[5014]: I0228 04:52:02.844482 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-h9blq" 
event={"ID":"6bfcf6cb-666f-44c2-885b-f916a1e81b8f","Type":"ContainerStarted","Data":"ced0c0d22ef31cfa6e340fd47580c4cd1815ff4a32d0a7c8a2e1bba831a98d57"} Feb 28 04:52:02 crc kubenswrapper[5014]: I0228 04:52:02.844610 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-h9blq" Feb 28 04:52:02 crc kubenswrapper[5014]: I0228 04:52:02.846720 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"5a44d0e3-2ba4-4d6f-924b-1f516c90a11f","Type":"ContainerStarted","Data":"f502b63a3aa4c78b36ceba5ef8361b1ed31b64fda54de6fd4a5bc253603bea53"} Feb 28 04:52:02 crc kubenswrapper[5014]: I0228 04:52:02.849701 5014 generic.go:334] "Generic (PLEG): container finished" podID="ac71caa8-2f63-4b64-8d37-a1b364b62158" containerID="4ff81fad405af183be13f59f3f3f381894b6c5fd0a062f8a5f4987e418c15fd4" exitCode=0 Feb 28 04:52:02 crc kubenswrapper[5014]: I0228 04:52:02.849772 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ac71caa8-2f63-4b64-8d37-a1b364b62158","Type":"ContainerDied","Data":"4ff81fad405af183be13f59f3f3f381894b6c5fd0a062f8a5f4987e418c15fd4"} Feb 28 04:52:02 crc kubenswrapper[5014]: I0228 04:52:02.855019 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-6vfgk" podStartSLOduration=10.267406642 podStartE2EDuration="17.854998626s" podCreationTimestamp="2026-02-28 04:51:45 +0000 UTC" firstStartedPulling="2026-02-28 04:51:50.063037863 +0000 UTC m=+1098.733163773" lastFinishedPulling="2026-02-28 04:51:57.650629847 +0000 UTC m=+1106.320755757" observedRunningTime="2026-02-28 04:52:02.850911015 +0000 UTC m=+1111.521036935" watchObservedRunningTime="2026-02-28 04:52:02.854998626 +0000 UTC m=+1111.525124556" Feb 28 04:52:02 crc kubenswrapper[5014]: I0228 04:52:02.856854 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"569b1ad4-179c-4852-a5fc-509fe31df812","Type":"ContainerStarted","Data":"575953559fe8454a6d43f85ad9e1e6ff6a948ffcbf111950a1d29bf4854230c3"} Feb 28 04:52:02 crc kubenswrapper[5014]: I0228 04:52:02.875896 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=6.5659678150000005 podStartE2EDuration="18.875876888s" podCreationTimestamp="2026-02-28 04:51:44 +0000 UTC" firstStartedPulling="2026-02-28 04:51:49.94247719 +0000 UTC m=+1098.612603110" lastFinishedPulling="2026-02-28 04:52:02.252386273 +0000 UTC m=+1110.922512183" observedRunningTime="2026-02-28 04:52:02.869209159 +0000 UTC m=+1111.539335069" watchObservedRunningTime="2026-02-28 04:52:02.875876888 +0000 UTC m=+1111.546002818" Feb 28 04:52:02 crc kubenswrapper[5014]: I0228 04:52:02.895228 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-mgzdl" podStartSLOduration=9.2110338 podStartE2EDuration="12.8952054s" podCreationTimestamp="2026-02-28 04:51:50 +0000 UTC" firstStartedPulling="2026-02-28 04:51:58.460761715 +0000 UTC m=+1107.130887625" lastFinishedPulling="2026-02-28 04:52:02.144933315 +0000 UTC m=+1110.815059225" observedRunningTime="2026-02-28 04:52:02.892415604 +0000 UTC m=+1111.562541524" watchObservedRunningTime="2026-02-28 04:52:02.8952054 +0000 UTC m=+1111.565331320" Feb 28 04:52:02 crc kubenswrapper[5014]: I0228 04:52:02.936724 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-h9blq" podStartSLOduration=11.936696239 podStartE2EDuration="11.936696239s" podCreationTimestamp="2026-02-28 04:51:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:52:02.930712588 +0000 UTC m=+1111.600838508" watchObservedRunningTime="2026-02-28 04:52:02.936696239 +0000 UTC m=+1111.606822159" Feb 28 04:52:02 crc 
kubenswrapper[5014]: I0228 04:52:02.979412 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f896c8c65-hsmzr" podStartSLOduration=11.97939203 podStartE2EDuration="11.97939203s" podCreationTimestamp="2026-02-28 04:51:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:52:02.975401423 +0000 UTC m=+1111.645527343" watchObservedRunningTime="2026-02-28 04:52:02.97939203 +0000 UTC m=+1111.649517950" Feb 28 04:52:03 crc kubenswrapper[5014]: I0228 04:52:03.013484 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.343680206 podStartE2EDuration="15.013462079s" podCreationTimestamp="2026-02-28 04:51:48 +0000 UTC" firstStartedPulling="2026-02-28 04:51:50.470825556 +0000 UTC m=+1099.140951466" lastFinishedPulling="2026-02-28 04:52:02.140607429 +0000 UTC m=+1110.810733339" observedRunningTime="2026-02-28 04:52:03.005937415 +0000 UTC m=+1111.676063335" watchObservedRunningTime="2026-02-28 04:52:03.013462079 +0000 UTC m=+1111.683587989" Feb 28 04:52:03 crc kubenswrapper[5014]: I0228 04:52:03.867523 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"ac71caa8-2f63-4b64-8d37-a1b364b62158","Type":"ContainerStarted","Data":"81b1501a532bcbf9130c8b3cb46bca758f66a4d8af2341a5194dca0476963bf1"} Feb 28 04:52:03 crc kubenswrapper[5014]: I0228 04:52:03.869876 5014 generic.go:334] "Generic (PLEG): container finished" podID="46285c3a-9d55-4bc5-8b40-8413ca3e8a4e" containerID="5397b2bb549aeb4a32e16958de7d16547652ece61311bcf11a6a1f357ea86a32" exitCode=0 Feb 28 04:52:03 crc kubenswrapper[5014]: I0228 04:52:03.869939 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537572-krzx2" 
event={"ID":"46285c3a-9d55-4bc5-8b40-8413ca3e8a4e","Type":"ContainerDied","Data":"5397b2bb549aeb4a32e16958de7d16547652ece61311bcf11a6a1f357ea86a32"} Feb 28 04:52:03 crc kubenswrapper[5014]: I0228 04:52:03.871699 5014 generic.go:334] "Generic (PLEG): container finished" podID="c1c70607-6183-4835-9ce6-fe3ef0d2b6fb" containerID="efe4a02cfc23275439496a991dd41ea99b1666f3a2bb408b1a3771280c7e5bb6" exitCode=0 Feb 28 04:52:03 crc kubenswrapper[5014]: I0228 04:52:03.871725 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"c1c70607-6183-4835-9ce6-fe3ef0d2b6fb","Type":"ContainerDied","Data":"efe4a02cfc23275439496a991dd41ea99b1666f3a2bb408b1a3771280c7e5bb6"} Feb 28 04:52:03 crc kubenswrapper[5014]: I0228 04:52:03.872095 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-6vfgk" Feb 28 04:52:03 crc kubenswrapper[5014]: I0228 04:52:03.895738 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=17.708348818 podStartE2EDuration="25.895717273s" podCreationTimestamp="2026-02-28 04:51:38 +0000 UTC" firstStartedPulling="2026-02-28 04:51:49.594789379 +0000 UTC m=+1098.264915289" lastFinishedPulling="2026-02-28 04:51:57.782157834 +0000 UTC m=+1106.452283744" observedRunningTime="2026-02-28 04:52:03.88595731 +0000 UTC m=+1112.556083220" watchObservedRunningTime="2026-02-28 04:52:03.895717273 +0000 UTC m=+1112.565843183" Feb 28 04:52:04 crc kubenswrapper[5014]: I0228 04:52:04.366878 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 28 04:52:04 crc kubenswrapper[5014]: I0228 04:52:04.408466 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 28 04:52:04 crc kubenswrapper[5014]: I0228 04:52:04.883906 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"c1c70607-6183-4835-9ce6-fe3ef0d2b6fb","Type":"ContainerStarted","Data":"3622707ca70b1065325704b66379fde839544366f0b0a1184285e4a3176c33fb"} Feb 28 04:52:04 crc kubenswrapper[5014]: I0228 04:52:04.885428 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 28 04:52:04 crc kubenswrapper[5014]: I0228 04:52:04.909484 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=19.433642004 podStartE2EDuration="27.909455413s" podCreationTimestamp="2026-02-28 04:51:37 +0000 UTC" firstStartedPulling="2026-02-28 04:51:49.267364894 +0000 UTC m=+1097.937490804" lastFinishedPulling="2026-02-28 04:51:57.743178303 +0000 UTC m=+1106.413304213" observedRunningTime="2026-02-28 04:52:04.906646447 +0000 UTC m=+1113.576772367" watchObservedRunningTime="2026-02-28 04:52:04.909455413 +0000 UTC m=+1113.579581363" Feb 28 04:52:04 crc kubenswrapper[5014]: I0228 04:52:04.939125 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 28 04:52:04 crc kubenswrapper[5014]: I0228 04:52:04.961620 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 28 04:52:04 crc kubenswrapper[5014]: I0228 04:52:04.963238 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 28 04:52:05 crc kubenswrapper[5014]: I0228 04:52:05.025967 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 28 04:52:05 crc kubenswrapper[5014]: I0228 04:52:05.245762 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537572-krzx2" Feb 28 04:52:05 crc kubenswrapper[5014]: I0228 04:52:05.343103 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n2pj\" (UniqueName: \"kubernetes.io/projected/46285c3a-9d55-4bc5-8b40-8413ca3e8a4e-kube-api-access-9n2pj\") pod \"46285c3a-9d55-4bc5-8b40-8413ca3e8a4e\" (UID: \"46285c3a-9d55-4bc5-8b40-8413ca3e8a4e\") " Feb 28 04:52:05 crc kubenswrapper[5014]: I0228 04:52:05.348888 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46285c3a-9d55-4bc5-8b40-8413ca3e8a4e-kube-api-access-9n2pj" (OuterVolumeSpecName: "kube-api-access-9n2pj") pod "46285c3a-9d55-4bc5-8b40-8413ca3e8a4e" (UID: "46285c3a-9d55-4bc5-8b40-8413ca3e8a4e"). InnerVolumeSpecName "kube-api-access-9n2pj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:52:05 crc kubenswrapper[5014]: I0228 04:52:05.445465 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n2pj\" (UniqueName: \"kubernetes.io/projected/46285c3a-9d55-4bc5-8b40-8413ca3e8a4e-kube-api-access-9n2pj\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:05 crc kubenswrapper[5014]: I0228 04:52:05.688416 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 28 04:52:05 crc kubenswrapper[5014]: I0228 04:52:05.896277 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537572-krzx2" event={"ID":"46285c3a-9d55-4bc5-8b40-8413ca3e8a4e","Type":"ContainerDied","Data":"940ce313213b68d7014aa69e5f69e1165059c69b8659743f353936158f003584"} Feb 28 04:52:05 crc kubenswrapper[5014]: I0228 04:52:05.896324 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="940ce313213b68d7014aa69e5f69e1165059c69b8659743f353936158f003584" Feb 28 04:52:05 crc kubenswrapper[5014]: I0228 04:52:05.896563 5014 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537572-krzx2" Feb 28 04:52:05 crc kubenswrapper[5014]: I0228 04:52:05.947172 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 28 04:52:06 crc kubenswrapper[5014]: I0228 04:52:06.139472 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 28 04:52:06 crc kubenswrapper[5014]: E0228 04:52:06.139900 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46285c3a-9d55-4bc5-8b40-8413ca3e8a4e" containerName="oc" Feb 28 04:52:06 crc kubenswrapper[5014]: I0228 04:52:06.139919 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="46285c3a-9d55-4bc5-8b40-8413ca3e8a4e" containerName="oc" Feb 28 04:52:06 crc kubenswrapper[5014]: I0228 04:52:06.140165 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="46285c3a-9d55-4bc5-8b40-8413ca3e8a4e" containerName="oc" Feb 28 04:52:06 crc kubenswrapper[5014]: I0228 04:52:06.144703 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 28 04:52:06 crc kubenswrapper[5014]: I0228 04:52:06.147165 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 28 04:52:06 crc kubenswrapper[5014]: I0228 04:52:06.147177 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 28 04:52:06 crc kubenswrapper[5014]: I0228 04:52:06.147356 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-tfnsn" Feb 28 04:52:06 crc kubenswrapper[5014]: I0228 04:52:06.149080 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 28 04:52:06 crc kubenswrapper[5014]: I0228 04:52:06.149295 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 28 04:52:06 crc kubenswrapper[5014]: I0228 04:52:06.159246 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/22702874-a9ba-4491-aed2-5ef93384150c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"22702874-a9ba-4491-aed2-5ef93384150c\") " pod="openstack/ovn-northd-0" Feb 28 04:52:06 crc kubenswrapper[5014]: I0228 04:52:06.159306 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22702874-a9ba-4491-aed2-5ef93384150c-scripts\") pod \"ovn-northd-0\" (UID: \"22702874-a9ba-4491-aed2-5ef93384150c\") " pod="openstack/ovn-northd-0" Feb 28 04:52:06 crc kubenswrapper[5014]: I0228 04:52:06.159391 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22702874-a9ba-4491-aed2-5ef93384150c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"22702874-a9ba-4491-aed2-5ef93384150c\") " 
pod="openstack/ovn-northd-0" Feb 28 04:52:06 crc kubenswrapper[5014]: I0228 04:52:06.159429 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22702874-a9ba-4491-aed2-5ef93384150c-config\") pod \"ovn-northd-0\" (UID: \"22702874-a9ba-4491-aed2-5ef93384150c\") " pod="openstack/ovn-northd-0" Feb 28 04:52:06 crc kubenswrapper[5014]: I0228 04:52:06.159516 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22702874-a9ba-4491-aed2-5ef93384150c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"22702874-a9ba-4491-aed2-5ef93384150c\") " pod="openstack/ovn-northd-0" Feb 28 04:52:06 crc kubenswrapper[5014]: I0228 04:52:06.159555 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22702874-a9ba-4491-aed2-5ef93384150c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"22702874-a9ba-4491-aed2-5ef93384150c\") " pod="openstack/ovn-northd-0" Feb 28 04:52:06 crc kubenswrapper[5014]: I0228 04:52:06.159582 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dww49\" (UniqueName: \"kubernetes.io/projected/22702874-a9ba-4491-aed2-5ef93384150c-kube-api-access-dww49\") pod \"ovn-northd-0\" (UID: \"22702874-a9ba-4491-aed2-5ef93384150c\") " pod="openstack/ovn-northd-0" Feb 28 04:52:06 crc kubenswrapper[5014]: I0228 04:52:06.260424 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/22702874-a9ba-4491-aed2-5ef93384150c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"22702874-a9ba-4491-aed2-5ef93384150c\") " pod="openstack/ovn-northd-0" Feb 28 04:52:06 crc kubenswrapper[5014]: I0228 04:52:06.260491 5014 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22702874-a9ba-4491-aed2-5ef93384150c-scripts\") pod \"ovn-northd-0\" (UID: \"22702874-a9ba-4491-aed2-5ef93384150c\") " pod="openstack/ovn-northd-0" Feb 28 04:52:06 crc kubenswrapper[5014]: I0228 04:52:06.260557 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22702874-a9ba-4491-aed2-5ef93384150c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"22702874-a9ba-4491-aed2-5ef93384150c\") " pod="openstack/ovn-northd-0" Feb 28 04:52:06 crc kubenswrapper[5014]: I0228 04:52:06.260595 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22702874-a9ba-4491-aed2-5ef93384150c-config\") pod \"ovn-northd-0\" (UID: \"22702874-a9ba-4491-aed2-5ef93384150c\") " pod="openstack/ovn-northd-0" Feb 28 04:52:06 crc kubenswrapper[5014]: I0228 04:52:06.260623 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22702874-a9ba-4491-aed2-5ef93384150c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"22702874-a9ba-4491-aed2-5ef93384150c\") " pod="openstack/ovn-northd-0" Feb 28 04:52:06 crc kubenswrapper[5014]: I0228 04:52:06.260642 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22702874-a9ba-4491-aed2-5ef93384150c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"22702874-a9ba-4491-aed2-5ef93384150c\") " pod="openstack/ovn-northd-0" Feb 28 04:52:06 crc kubenswrapper[5014]: I0228 04:52:06.260665 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dww49\" (UniqueName: \"kubernetes.io/projected/22702874-a9ba-4491-aed2-5ef93384150c-kube-api-access-dww49\") pod \"ovn-northd-0\" (UID: 
\"22702874-a9ba-4491-aed2-5ef93384150c\") " pod="openstack/ovn-northd-0" Feb 28 04:52:06 crc kubenswrapper[5014]: I0228 04:52:06.262569 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22702874-a9ba-4491-aed2-5ef93384150c-config\") pod \"ovn-northd-0\" (UID: \"22702874-a9ba-4491-aed2-5ef93384150c\") " pod="openstack/ovn-northd-0" Feb 28 04:52:06 crc kubenswrapper[5014]: I0228 04:52:06.263304 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22702874-a9ba-4491-aed2-5ef93384150c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"22702874-a9ba-4491-aed2-5ef93384150c\") " pod="openstack/ovn-northd-0" Feb 28 04:52:06 crc kubenswrapper[5014]: I0228 04:52:06.266920 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/22702874-a9ba-4491-aed2-5ef93384150c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"22702874-a9ba-4491-aed2-5ef93384150c\") " pod="openstack/ovn-northd-0" Feb 28 04:52:06 crc kubenswrapper[5014]: I0228 04:52:06.268618 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22702874-a9ba-4491-aed2-5ef93384150c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"22702874-a9ba-4491-aed2-5ef93384150c\") " pod="openstack/ovn-northd-0" Feb 28 04:52:06 crc kubenswrapper[5014]: I0228 04:52:06.272198 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22702874-a9ba-4491-aed2-5ef93384150c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"22702874-a9ba-4491-aed2-5ef93384150c\") " pod="openstack/ovn-northd-0" Feb 28 04:52:06 crc kubenswrapper[5014]: I0228 04:52:06.273331 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/22702874-a9ba-4491-aed2-5ef93384150c-scripts\") pod \"ovn-northd-0\" (UID: \"22702874-a9ba-4491-aed2-5ef93384150c\") " pod="openstack/ovn-northd-0" Feb 28 04:52:06 crc kubenswrapper[5014]: I0228 04:52:06.292213 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dww49\" (UniqueName: \"kubernetes.io/projected/22702874-a9ba-4491-aed2-5ef93384150c-kube-api-access-dww49\") pod \"ovn-northd-0\" (UID: \"22702874-a9ba-4491-aed2-5ef93384150c\") " pod="openstack/ovn-northd-0" Feb 28 04:52:06 crc kubenswrapper[5014]: I0228 04:52:06.330372 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537566-7xlww"] Feb 28 04:52:06 crc kubenswrapper[5014]: I0228 04:52:06.342983 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537566-7xlww"] Feb 28 04:52:06 crc kubenswrapper[5014]: I0228 04:52:06.461136 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 28 04:52:06 crc kubenswrapper[5014]: I0228 04:52:06.930583 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 28 04:52:07 crc kubenswrapper[5014]: I0228 04:52:07.915067 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"22702874-a9ba-4491-aed2-5ef93384150c","Type":"ContainerStarted","Data":"9d766642b78c3ea1db08377e7581d8e24c9a195f9077e4fe9f6f2c555e8c3286"} Feb 28 04:52:08 crc kubenswrapper[5014]: I0228 04:52:08.188994 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9af98cd8-9086-42ab-833a-2eb0d1fb73d5" path="/var/lib/kubelet/pods/9af98cd8-9086-42ab-833a-2eb0d1fb73d5/volumes" Feb 28 04:52:08 crc kubenswrapper[5014]: I0228 04:52:08.998852 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 28 04:52:08 crc kubenswrapper[5014]: I0228 04:52:08.998952 5014 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 28 04:52:10 crc kubenswrapper[5014]: I0228 04:52:10.367796 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 28 04:52:10 crc kubenswrapper[5014]: I0228 04:52:10.368209 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 28 04:52:11 crc kubenswrapper[5014]: I0228 04:52:11.453117 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f896c8c65-hsmzr" Feb 28 04:52:11 crc kubenswrapper[5014]: I0228 04:52:11.602017 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-h9blq" Feb 28 04:52:11 crc kubenswrapper[5014]: I0228 04:52:11.649057 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-hsmzr"] Feb 28 04:52:11 crc kubenswrapper[5014]: I0228 04:52:11.952917 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f896c8c65-hsmzr" podUID="0fe0cfd6-ec18-4221-8722-8be777814e26" containerName="dnsmasq-dns" containerID="cri-o://2be3054cec1ee9c7695500e43eadf17a93851f048687058a242443839feeeb62" gracePeriod=10 Feb 28 04:52:12 crc kubenswrapper[5014]: I0228 04:52:12.906216 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 28 04:52:12 crc kubenswrapper[5014]: I0228 04:52:12.959035 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-kgdjz"] Feb 28 04:52:12 crc kubenswrapper[5014]: I0228 04:52:12.960454 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-kgdjz" Feb 28 04:52:12 crc kubenswrapper[5014]: I0228 04:52:12.974047 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-kgdjz"] Feb 28 04:52:13 crc kubenswrapper[5014]: I0228 04:52:13.082891 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b1dde17-8b85-45c0-bef3-a9439be5632e-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-kgdjz\" (UID: \"8b1dde17-8b85-45c0-bef3-a9439be5632e\") " pod="openstack/dnsmasq-dns-698758b865-kgdjz" Feb 28 04:52:13 crc kubenswrapper[5014]: I0228 04:52:13.082952 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b1dde17-8b85-45c0-bef3-a9439be5632e-config\") pod \"dnsmasq-dns-698758b865-kgdjz\" (UID: \"8b1dde17-8b85-45c0-bef3-a9439be5632e\") " pod="openstack/dnsmasq-dns-698758b865-kgdjz" Feb 28 04:52:13 crc kubenswrapper[5014]: I0228 04:52:13.082982 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b1dde17-8b85-45c0-bef3-a9439be5632e-dns-svc\") pod \"dnsmasq-dns-698758b865-kgdjz\" (UID: \"8b1dde17-8b85-45c0-bef3-a9439be5632e\") " pod="openstack/dnsmasq-dns-698758b865-kgdjz" Feb 28 04:52:13 crc kubenswrapper[5014]: I0228 04:52:13.083202 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdphb\" (UniqueName: \"kubernetes.io/projected/8b1dde17-8b85-45c0-bef3-a9439be5632e-kube-api-access-qdphb\") pod \"dnsmasq-dns-698758b865-kgdjz\" (UID: \"8b1dde17-8b85-45c0-bef3-a9439be5632e\") " pod="openstack/dnsmasq-dns-698758b865-kgdjz" Feb 28 04:52:13 crc kubenswrapper[5014]: I0228 04:52:13.083262 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b1dde17-8b85-45c0-bef3-a9439be5632e-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-kgdjz\" (UID: \"8b1dde17-8b85-45c0-bef3-a9439be5632e\") " pod="openstack/dnsmasq-dns-698758b865-kgdjz" Feb 28 04:52:13 crc kubenswrapper[5014]: I0228 04:52:13.184653 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdphb\" (UniqueName: \"kubernetes.io/projected/8b1dde17-8b85-45c0-bef3-a9439be5632e-kube-api-access-qdphb\") pod \"dnsmasq-dns-698758b865-kgdjz\" (UID: \"8b1dde17-8b85-45c0-bef3-a9439be5632e\") " pod="openstack/dnsmasq-dns-698758b865-kgdjz" Feb 28 04:52:13 crc kubenswrapper[5014]: I0228 04:52:13.184698 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b1dde17-8b85-45c0-bef3-a9439be5632e-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-kgdjz\" (UID: \"8b1dde17-8b85-45c0-bef3-a9439be5632e\") " pod="openstack/dnsmasq-dns-698758b865-kgdjz" Feb 28 04:52:13 crc kubenswrapper[5014]: I0228 04:52:13.185610 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b1dde17-8b85-45c0-bef3-a9439be5632e-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-kgdjz\" (UID: \"8b1dde17-8b85-45c0-bef3-a9439be5632e\") " pod="openstack/dnsmasq-dns-698758b865-kgdjz" Feb 28 04:52:13 crc kubenswrapper[5014]: I0228 04:52:13.185675 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b1dde17-8b85-45c0-bef3-a9439be5632e-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-kgdjz\" (UID: \"8b1dde17-8b85-45c0-bef3-a9439be5632e\") " pod="openstack/dnsmasq-dns-698758b865-kgdjz" Feb 28 04:52:13 crc kubenswrapper[5014]: I0228 04:52:13.186389 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/8b1dde17-8b85-45c0-bef3-a9439be5632e-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-kgdjz\" (UID: \"8b1dde17-8b85-45c0-bef3-a9439be5632e\") " pod="openstack/dnsmasq-dns-698758b865-kgdjz" Feb 28 04:52:13 crc kubenswrapper[5014]: I0228 04:52:13.185712 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b1dde17-8b85-45c0-bef3-a9439be5632e-config\") pod \"dnsmasq-dns-698758b865-kgdjz\" (UID: \"8b1dde17-8b85-45c0-bef3-a9439be5632e\") " pod="openstack/dnsmasq-dns-698758b865-kgdjz" Feb 28 04:52:13 crc kubenswrapper[5014]: I0228 04:52:13.186514 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b1dde17-8b85-45c0-bef3-a9439be5632e-dns-svc\") pod \"dnsmasq-dns-698758b865-kgdjz\" (UID: \"8b1dde17-8b85-45c0-bef3-a9439be5632e\") " pod="openstack/dnsmasq-dns-698758b865-kgdjz" Feb 28 04:52:13 crc kubenswrapper[5014]: I0228 04:52:13.186618 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b1dde17-8b85-45c0-bef3-a9439be5632e-config\") pod \"dnsmasq-dns-698758b865-kgdjz\" (UID: \"8b1dde17-8b85-45c0-bef3-a9439be5632e\") " pod="openstack/dnsmasq-dns-698758b865-kgdjz" Feb 28 04:52:13 crc kubenswrapper[5014]: I0228 04:52:13.187202 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b1dde17-8b85-45c0-bef3-a9439be5632e-dns-svc\") pod \"dnsmasq-dns-698758b865-kgdjz\" (UID: \"8b1dde17-8b85-45c0-bef3-a9439be5632e\") " pod="openstack/dnsmasq-dns-698758b865-kgdjz" Feb 28 04:52:13 crc kubenswrapper[5014]: I0228 04:52:13.203765 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdphb\" (UniqueName: \"kubernetes.io/projected/8b1dde17-8b85-45c0-bef3-a9439be5632e-kube-api-access-qdphb\") pod \"dnsmasq-dns-698758b865-kgdjz\" (UID: 
\"8b1dde17-8b85-45c0-bef3-a9439be5632e\") " pod="openstack/dnsmasq-dns-698758b865-kgdjz" Feb 28 04:52:13 crc kubenswrapper[5014]: I0228 04:52:13.319066 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-kgdjz" Feb 28 04:52:13 crc kubenswrapper[5014]: I0228 04:52:13.803727 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-kgdjz"] Feb 28 04:52:13 crc kubenswrapper[5014]: I0228 04:52:13.977082 5014 generic.go:334] "Generic (PLEG): container finished" podID="0fe0cfd6-ec18-4221-8722-8be777814e26" containerID="2be3054cec1ee9c7695500e43eadf17a93851f048687058a242443839feeeb62" exitCode=0 Feb 28 04:52:13 crc kubenswrapper[5014]: I0228 04:52:13.977152 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-hsmzr" event={"ID":"0fe0cfd6-ec18-4221-8722-8be777814e26","Type":"ContainerDied","Data":"2be3054cec1ee9c7695500e43eadf17a93851f048687058a242443839feeeb62"} Feb 28 04:52:13 crc kubenswrapper[5014]: I0228 04:52:13.978352 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-kgdjz" event={"ID":"8b1dde17-8b85-45c0-bef3-a9439be5632e","Type":"ContainerStarted","Data":"2dbaa08081bda4d23cae5a7e1258718b564bed8bbd9ac1a8e7c8d5722e782918"} Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.050125 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.057488 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.064725 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.064943 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-9fhmt" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.065633 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.066083 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.085674 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.205304 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/2998e28e-fceb-4daa-a26c-74bffeba0d8f-cache\") pod \"swift-storage-0\" (UID: \"2998e28e-fceb-4daa-a26c-74bffeba0d8f\") " pod="openstack/swift-storage-0" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.205362 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mftwl\" (UniqueName: \"kubernetes.io/projected/2998e28e-fceb-4daa-a26c-74bffeba0d8f-kube-api-access-mftwl\") pod \"swift-storage-0\" (UID: \"2998e28e-fceb-4daa-a26c-74bffeba0d8f\") " pod="openstack/swift-storage-0" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.205498 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2998e28e-fceb-4daa-a26c-74bffeba0d8f-etc-swift\") pod \"swift-storage-0\" (UID: \"2998e28e-fceb-4daa-a26c-74bffeba0d8f\") " 
pod="openstack/swift-storage-0" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.205666 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2998e28e-fceb-4daa-a26c-74bffeba0d8f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"2998e28e-fceb-4daa-a26c-74bffeba0d8f\") " pod="openstack/swift-storage-0" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.205823 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"2998e28e-fceb-4daa-a26c-74bffeba0d8f\") " pod="openstack/swift-storage-0" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.205950 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/2998e28e-fceb-4daa-a26c-74bffeba0d8f-lock\") pod \"swift-storage-0\" (UID: \"2998e28e-fceb-4daa-a26c-74bffeba0d8f\") " pod="openstack/swift-storage-0" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.307782 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2998e28e-fceb-4daa-a26c-74bffeba0d8f-etc-swift\") pod \"swift-storage-0\" (UID: \"2998e28e-fceb-4daa-a26c-74bffeba0d8f\") " pod="openstack/swift-storage-0" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.308112 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2998e28e-fceb-4daa-a26c-74bffeba0d8f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"2998e28e-fceb-4daa-a26c-74bffeba0d8f\") " pod="openstack/swift-storage-0" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.308158 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"2998e28e-fceb-4daa-a26c-74bffeba0d8f\") " pod="openstack/swift-storage-0" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.308199 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/2998e28e-fceb-4daa-a26c-74bffeba0d8f-lock\") pod \"swift-storage-0\" (UID: \"2998e28e-fceb-4daa-a26c-74bffeba0d8f\") " pod="openstack/swift-storage-0" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.308226 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/2998e28e-fceb-4daa-a26c-74bffeba0d8f-cache\") pod \"swift-storage-0\" (UID: \"2998e28e-fceb-4daa-a26c-74bffeba0d8f\") " pod="openstack/swift-storage-0" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.308245 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mftwl\" (UniqueName: \"kubernetes.io/projected/2998e28e-fceb-4daa-a26c-74bffeba0d8f-kube-api-access-mftwl\") pod \"swift-storage-0\" (UID: \"2998e28e-fceb-4daa-a26c-74bffeba0d8f\") " pod="openstack/swift-storage-0" Feb 28 04:52:14 crc kubenswrapper[5014]: E0228 04:52:14.308589 5014 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 28 04:52:14 crc kubenswrapper[5014]: E0228 04:52:14.308607 5014 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 28 04:52:14 crc kubenswrapper[5014]: E0228 04:52:14.308647 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2998e28e-fceb-4daa-a26c-74bffeba0d8f-etc-swift podName:2998e28e-fceb-4daa-a26c-74bffeba0d8f nodeName:}" failed. 
No retries permitted until 2026-02-28 04:52:14.808631542 +0000 UTC m=+1123.478757442 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2998e28e-fceb-4daa-a26c-74bffeba0d8f-etc-swift") pod "swift-storage-0" (UID: "2998e28e-fceb-4daa-a26c-74bffeba0d8f") : configmap "swift-ring-files" not found Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.309537 5014 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"2998e28e-fceb-4daa-a26c-74bffeba0d8f\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/swift-storage-0" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.309587 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/2998e28e-fceb-4daa-a26c-74bffeba0d8f-lock\") pod \"swift-storage-0\" (UID: \"2998e28e-fceb-4daa-a26c-74bffeba0d8f\") " pod="openstack/swift-storage-0" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.309937 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/2998e28e-fceb-4daa-a26c-74bffeba0d8f-cache\") pod \"swift-storage-0\" (UID: \"2998e28e-fceb-4daa-a26c-74bffeba0d8f\") " pod="openstack/swift-storage-0" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.331653 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2998e28e-fceb-4daa-a26c-74bffeba0d8f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"2998e28e-fceb-4daa-a26c-74bffeba0d8f\") " pod="openstack/swift-storage-0" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.331693 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mftwl\" (UniqueName: 
\"kubernetes.io/projected/2998e28e-fceb-4daa-a26c-74bffeba0d8f-kube-api-access-mftwl\") pod \"swift-storage-0\" (UID: \"2998e28e-fceb-4daa-a26c-74bffeba0d8f\") " pod="openstack/swift-storage-0" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.335868 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"2998e28e-fceb-4daa-a26c-74bffeba0d8f\") " pod="openstack/swift-storage-0" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.399285 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-dn9mn"] Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.400203 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-dn9mn" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.401785 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.402358 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.402469 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.414082 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-dn9mn"] Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.511473 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15c6e56b-a312-43c9-b627-af4138518fe4-combined-ca-bundle\") pod \"swift-ring-rebalance-dn9mn\" (UID: \"15c6e56b-a312-43c9-b627-af4138518fe4\") " pod="openstack/swift-ring-rebalance-dn9mn" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 
04:52:14.511560 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/15c6e56b-a312-43c9-b627-af4138518fe4-swiftconf\") pod \"swift-ring-rebalance-dn9mn\" (UID: \"15c6e56b-a312-43c9-b627-af4138518fe4\") " pod="openstack/swift-ring-rebalance-dn9mn" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.511577 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/15c6e56b-a312-43c9-b627-af4138518fe4-etc-swift\") pod \"swift-ring-rebalance-dn9mn\" (UID: \"15c6e56b-a312-43c9-b627-af4138518fe4\") " pod="openstack/swift-ring-rebalance-dn9mn" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.511594 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/15c6e56b-a312-43c9-b627-af4138518fe4-ring-data-devices\") pod \"swift-ring-rebalance-dn9mn\" (UID: \"15c6e56b-a312-43c9-b627-af4138518fe4\") " pod="openstack/swift-ring-rebalance-dn9mn" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.511616 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15c6e56b-a312-43c9-b627-af4138518fe4-scripts\") pod \"swift-ring-rebalance-dn9mn\" (UID: \"15c6e56b-a312-43c9-b627-af4138518fe4\") " pod="openstack/swift-ring-rebalance-dn9mn" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.511711 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2dqt\" (UniqueName: \"kubernetes.io/projected/15c6e56b-a312-43c9-b627-af4138518fe4-kube-api-access-x2dqt\") pod \"swift-ring-rebalance-dn9mn\" (UID: \"15c6e56b-a312-43c9-b627-af4138518fe4\") " pod="openstack/swift-ring-rebalance-dn9mn" Feb 28 04:52:14 crc kubenswrapper[5014]: 
I0228 04:52:14.511744 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/15c6e56b-a312-43c9-b627-af4138518fe4-dispersionconf\") pod \"swift-ring-rebalance-dn9mn\" (UID: \"15c6e56b-a312-43c9-b627-af4138518fe4\") " pod="openstack/swift-ring-rebalance-dn9mn" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.613044 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2dqt\" (UniqueName: \"kubernetes.io/projected/15c6e56b-a312-43c9-b627-af4138518fe4-kube-api-access-x2dqt\") pod \"swift-ring-rebalance-dn9mn\" (UID: \"15c6e56b-a312-43c9-b627-af4138518fe4\") " pod="openstack/swift-ring-rebalance-dn9mn" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.613126 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/15c6e56b-a312-43c9-b627-af4138518fe4-dispersionconf\") pod \"swift-ring-rebalance-dn9mn\" (UID: \"15c6e56b-a312-43c9-b627-af4138518fe4\") " pod="openstack/swift-ring-rebalance-dn9mn" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.613287 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15c6e56b-a312-43c9-b627-af4138518fe4-combined-ca-bundle\") pod \"swift-ring-rebalance-dn9mn\" (UID: \"15c6e56b-a312-43c9-b627-af4138518fe4\") " pod="openstack/swift-ring-rebalance-dn9mn" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.613329 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/15c6e56b-a312-43c9-b627-af4138518fe4-swiftconf\") pod \"swift-ring-rebalance-dn9mn\" (UID: \"15c6e56b-a312-43c9-b627-af4138518fe4\") " pod="openstack/swift-ring-rebalance-dn9mn" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.613362 5014 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/15c6e56b-a312-43c9-b627-af4138518fe4-etc-swift\") pod \"swift-ring-rebalance-dn9mn\" (UID: \"15c6e56b-a312-43c9-b627-af4138518fe4\") " pod="openstack/swift-ring-rebalance-dn9mn" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.613394 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/15c6e56b-a312-43c9-b627-af4138518fe4-ring-data-devices\") pod \"swift-ring-rebalance-dn9mn\" (UID: \"15c6e56b-a312-43c9-b627-af4138518fe4\") " pod="openstack/swift-ring-rebalance-dn9mn" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.613424 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15c6e56b-a312-43c9-b627-af4138518fe4-scripts\") pod \"swift-ring-rebalance-dn9mn\" (UID: \"15c6e56b-a312-43c9-b627-af4138518fe4\") " pod="openstack/swift-ring-rebalance-dn9mn" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.614632 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15c6e56b-a312-43c9-b627-af4138518fe4-scripts\") pod \"swift-ring-rebalance-dn9mn\" (UID: \"15c6e56b-a312-43c9-b627-af4138518fe4\") " pod="openstack/swift-ring-rebalance-dn9mn" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.614640 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/15c6e56b-a312-43c9-b627-af4138518fe4-ring-data-devices\") pod \"swift-ring-rebalance-dn9mn\" (UID: \"15c6e56b-a312-43c9-b627-af4138518fe4\") " pod="openstack/swift-ring-rebalance-dn9mn" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.614724 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/15c6e56b-a312-43c9-b627-af4138518fe4-etc-swift\") pod \"swift-ring-rebalance-dn9mn\" (UID: \"15c6e56b-a312-43c9-b627-af4138518fe4\") " pod="openstack/swift-ring-rebalance-dn9mn" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.621191 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/15c6e56b-a312-43c9-b627-af4138518fe4-dispersionconf\") pod \"swift-ring-rebalance-dn9mn\" (UID: \"15c6e56b-a312-43c9-b627-af4138518fe4\") " pod="openstack/swift-ring-rebalance-dn9mn" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.621235 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/15c6e56b-a312-43c9-b627-af4138518fe4-swiftconf\") pod \"swift-ring-rebalance-dn9mn\" (UID: \"15c6e56b-a312-43c9-b627-af4138518fe4\") " pod="openstack/swift-ring-rebalance-dn9mn" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.621761 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15c6e56b-a312-43c9-b627-af4138518fe4-combined-ca-bundle\") pod \"swift-ring-rebalance-dn9mn\" (UID: \"15c6e56b-a312-43c9-b627-af4138518fe4\") " pod="openstack/swift-ring-rebalance-dn9mn" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.641992 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2dqt\" (UniqueName: \"kubernetes.io/projected/15c6e56b-a312-43c9-b627-af4138518fe4-kube-api-access-x2dqt\") pod \"swift-ring-rebalance-dn9mn\" (UID: \"15c6e56b-a312-43c9-b627-af4138518fe4\") " pod="openstack/swift-ring-rebalance-dn9mn" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.724557 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-dn9mn" Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.816882 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2998e28e-fceb-4daa-a26c-74bffeba0d8f-etc-swift\") pod \"swift-storage-0\" (UID: \"2998e28e-fceb-4daa-a26c-74bffeba0d8f\") " pod="openstack/swift-storage-0" Feb 28 04:52:14 crc kubenswrapper[5014]: E0228 04:52:14.817384 5014 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 28 04:52:14 crc kubenswrapper[5014]: E0228 04:52:14.817402 5014 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 28 04:52:14 crc kubenswrapper[5014]: E0228 04:52:14.817464 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2998e28e-fceb-4daa-a26c-74bffeba0d8f-etc-swift podName:2998e28e-fceb-4daa-a26c-74bffeba0d8f nodeName:}" failed. No retries permitted until 2026-02-28 04:52:15.817447034 +0000 UTC m=+1124.487572954 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2998e28e-fceb-4daa-a26c-74bffeba0d8f-etc-swift") pod "swift-storage-0" (UID: "2998e28e-fceb-4daa-a26c-74bffeba0d8f") : configmap "swift-ring-files" not found Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.996408 5014 generic.go:334] "Generic (PLEG): container finished" podID="8b1dde17-8b85-45c0-bef3-a9439be5632e" containerID="8189ca449b58c04c5146ce79f4860ca822744669d903d6eb3bd5c6f0130218b8" exitCode=0 Feb 28 04:52:14 crc kubenswrapper[5014]: I0228 04:52:14.996453 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-kgdjz" event={"ID":"8b1dde17-8b85-45c0-bef3-a9439be5632e","Type":"ContainerDied","Data":"8189ca449b58c04c5146ce79f4860ca822744669d903d6eb3bd5c6f0130218b8"} Feb 28 04:52:15 crc kubenswrapper[5014]: I0228 04:52:15.208313 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-dn9mn"] Feb 28 04:52:15 crc kubenswrapper[5014]: W0228 04:52:15.215434 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15c6e56b_a312_43c9_b627_af4138518fe4.slice/crio-77b56616ed923103e2bf7ebf1bc96046c1d726d6acf77156e064ee2d8b068294 WatchSource:0}: Error finding container 77b56616ed923103e2bf7ebf1bc96046c1d726d6acf77156e064ee2d8b068294: Status 404 returned error can't find the container with id 77b56616ed923103e2bf7ebf1bc96046c1d726d6acf77156e064ee2d8b068294 Feb 28 04:52:15 crc kubenswrapper[5014]: I0228 04:52:15.835131 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2998e28e-fceb-4daa-a26c-74bffeba0d8f-etc-swift\") pod \"swift-storage-0\" (UID: \"2998e28e-fceb-4daa-a26c-74bffeba0d8f\") " pod="openstack/swift-storage-0" Feb 28 04:52:15 crc kubenswrapper[5014]: E0228 04:52:15.835371 5014 projected.go:288] Couldn't get configMap 
openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 28 04:52:15 crc kubenswrapper[5014]: E0228 04:52:15.835413 5014 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 28 04:52:15 crc kubenswrapper[5014]: E0228 04:52:15.835482 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2998e28e-fceb-4daa-a26c-74bffeba0d8f-etc-swift podName:2998e28e-fceb-4daa-a26c-74bffeba0d8f nodeName:}" failed. No retries permitted until 2026-02-28 04:52:17.835461839 +0000 UTC m=+1126.505587749 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2998e28e-fceb-4daa-a26c-74bffeba0d8f-etc-swift") pod "swift-storage-0" (UID: "2998e28e-fceb-4daa-a26c-74bffeba0d8f") : configmap "swift-ring-files" not found Feb 28 04:52:16 crc kubenswrapper[5014]: I0228 04:52:16.006867 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-kgdjz" event={"ID":"8b1dde17-8b85-45c0-bef3-a9439be5632e","Type":"ContainerStarted","Data":"b57907d1126245cbfcac823eeb4015387e57afb28aabba87c1cf311a841e1879"} Feb 28 04:52:16 crc kubenswrapper[5014]: I0228 04:52:16.007199 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-kgdjz" Feb 28 04:52:16 crc kubenswrapper[5014]: I0228 04:52:16.008863 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dn9mn" event={"ID":"15c6e56b-a312-43c9-b627-af4138518fe4","Type":"ContainerStarted","Data":"77b56616ed923103e2bf7ebf1bc96046c1d726d6acf77156e064ee2d8b068294"} Feb 28 04:52:16 crc kubenswrapper[5014]: I0228 04:52:16.046719 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-kgdjz" podStartSLOduration=4.046700917 podStartE2EDuration="4.046700917s" podCreationTimestamp="2026-02-28 04:52:12 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:52:16.043757726 +0000 UTC m=+1124.713883636" watchObservedRunningTime="2026-02-28 04:52:16.046700917 +0000 UTC m=+1124.716826837" Feb 28 04:52:17 crc kubenswrapper[5014]: I0228 04:52:17.869444 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2998e28e-fceb-4daa-a26c-74bffeba0d8f-etc-swift\") pod \"swift-storage-0\" (UID: \"2998e28e-fceb-4daa-a26c-74bffeba0d8f\") " pod="openstack/swift-storage-0" Feb 28 04:52:17 crc kubenswrapper[5014]: E0228 04:52:17.869696 5014 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 28 04:52:17 crc kubenswrapper[5014]: E0228 04:52:17.870219 5014 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 28 04:52:17 crc kubenswrapper[5014]: E0228 04:52:17.870298 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2998e28e-fceb-4daa-a26c-74bffeba0d8f-etc-swift podName:2998e28e-fceb-4daa-a26c-74bffeba0d8f nodeName:}" failed. No retries permitted until 2026-02-28 04:52:21.870274797 +0000 UTC m=+1130.540400727 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2998e28e-fceb-4daa-a26c-74bffeba0d8f-etc-swift") pod "swift-storage-0" (UID: "2998e28e-fceb-4daa-a26c-74bffeba0d8f") : configmap "swift-ring-files" not found Feb 28 04:52:19 crc kubenswrapper[5014]: I0228 04:52:19.133725 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-hsmzr" Feb 28 04:52:19 crc kubenswrapper[5014]: I0228 04:52:19.193124 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fe0cfd6-ec18-4221-8722-8be777814e26-config\") pod \"0fe0cfd6-ec18-4221-8722-8be777814e26\" (UID: \"0fe0cfd6-ec18-4221-8722-8be777814e26\") " Feb 28 04:52:19 crc kubenswrapper[5014]: I0228 04:52:19.193215 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fe0cfd6-ec18-4221-8722-8be777814e26-dns-svc\") pod \"0fe0cfd6-ec18-4221-8722-8be777814e26\" (UID: \"0fe0cfd6-ec18-4221-8722-8be777814e26\") " Feb 28 04:52:19 crc kubenswrapper[5014]: I0228 04:52:19.193271 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8qjm\" (UniqueName: \"kubernetes.io/projected/0fe0cfd6-ec18-4221-8722-8be777814e26-kube-api-access-b8qjm\") pod \"0fe0cfd6-ec18-4221-8722-8be777814e26\" (UID: \"0fe0cfd6-ec18-4221-8722-8be777814e26\") " Feb 28 04:52:19 crc kubenswrapper[5014]: I0228 04:52:19.193316 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fe0cfd6-ec18-4221-8722-8be777814e26-ovsdbserver-sb\") pod \"0fe0cfd6-ec18-4221-8722-8be777814e26\" (UID: \"0fe0cfd6-ec18-4221-8722-8be777814e26\") " Feb 28 04:52:19 crc kubenswrapper[5014]: I0228 04:52:19.201601 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fe0cfd6-ec18-4221-8722-8be777814e26-kube-api-access-b8qjm" (OuterVolumeSpecName: "kube-api-access-b8qjm") pod "0fe0cfd6-ec18-4221-8722-8be777814e26" (UID: "0fe0cfd6-ec18-4221-8722-8be777814e26"). InnerVolumeSpecName "kube-api-access-b8qjm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:52:19 crc kubenswrapper[5014]: I0228 04:52:19.240139 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fe0cfd6-ec18-4221-8722-8be777814e26-config" (OuterVolumeSpecName: "config") pod "0fe0cfd6-ec18-4221-8722-8be777814e26" (UID: "0fe0cfd6-ec18-4221-8722-8be777814e26"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:52:19 crc kubenswrapper[5014]: I0228 04:52:19.241351 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fe0cfd6-ec18-4221-8722-8be777814e26-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0fe0cfd6-ec18-4221-8722-8be777814e26" (UID: "0fe0cfd6-ec18-4221-8722-8be777814e26"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:52:19 crc kubenswrapper[5014]: I0228 04:52:19.252993 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fe0cfd6-ec18-4221-8722-8be777814e26-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0fe0cfd6-ec18-4221-8722-8be777814e26" (UID: "0fe0cfd6-ec18-4221-8722-8be777814e26"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:52:19 crc kubenswrapper[5014]: I0228 04:52:19.297307 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8qjm\" (UniqueName: \"kubernetes.io/projected/0fe0cfd6-ec18-4221-8722-8be777814e26-kube-api-access-b8qjm\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:19 crc kubenswrapper[5014]: I0228 04:52:19.297365 5014 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0fe0cfd6-ec18-4221-8722-8be777814e26-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:19 crc kubenswrapper[5014]: I0228 04:52:19.297384 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0fe0cfd6-ec18-4221-8722-8be777814e26-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:19 crc kubenswrapper[5014]: I0228 04:52:19.297401 5014 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0fe0cfd6-ec18-4221-8722-8be777814e26-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:20 crc kubenswrapper[5014]: I0228 04:52:20.043655 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-hsmzr" event={"ID":"0fe0cfd6-ec18-4221-8722-8be777814e26","Type":"ContainerDied","Data":"85ed0c890151727149c158feb460fe8f1416c5f345cf1c807d2a2b5cd4e1c1b4"} Feb 28 04:52:20 crc kubenswrapper[5014]: I0228 04:52:20.043772 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-hsmzr" Feb 28 04:52:20 crc kubenswrapper[5014]: I0228 04:52:20.043992 5014 scope.go:117] "RemoveContainer" containerID="2be3054cec1ee9c7695500e43eadf17a93851f048687058a242443839feeeb62" Feb 28 04:52:20 crc kubenswrapper[5014]: I0228 04:52:20.076116 5014 scope.go:117] "RemoveContainer" containerID="7c096662c9368c4e7ebbde93f9252fb0af118b7ecf7245d71c7fbe97229d2e58" Feb 28 04:52:20 crc kubenswrapper[5014]: I0228 04:52:20.077617 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-hsmzr"] Feb 28 04:52:20 crc kubenswrapper[5014]: I0228 04:52:20.094061 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-hsmzr"] Feb 28 04:52:20 crc kubenswrapper[5014]: I0228 04:52:20.182841 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fe0cfd6-ec18-4221-8722-8be777814e26" path="/var/lib/kubelet/pods/0fe0cfd6-ec18-4221-8722-8be777814e26/volumes" Feb 28 04:52:20 crc kubenswrapper[5014]: E0228 04:52:20.936378 5014 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage1438120769/1\": happened during read: context canceled" image="quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified" Feb 28 04:52:20 crc kubenswrapper[5014]: E0228 04:52:20.936655 5014 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-northd,Image:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,Command:[/usr/bin/ovn-northd],Args:[-vfile:off -vconsole:info --n-threads=1 --ovnnb-db=ssl:ovsdbserver-nb-0.openstack.svc.cluster.local:6641 --ovnsb-db=ssl:ovsdbserver-sb-0.openstack.svc.cluster.local:6642 --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n65bh87h689h58dh586h5dch77h6h58h68fh546h5fch685h5c8h5bh5fdh55h67bh698h65bh68h64bh556h687hdfh557h59fh699h575h596h645h64q,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:certs,Value:n58bh5fhb7h75h569h648h8bhd8h55fh59h55bh67ch5ffh654h5b9h54dh695h688hb7h55dh55chd9h5ch658h56h549h9chb5h665h55fh56fh595q,ValueFrom:nil,},EnvVar{Name:certs_metrics,Value:n684h546h9dh698h569hfh5c5h66ch5d7h5fbh68hf7h5cfh8fh8chffh57bh59dhc7h544h69h5c5h5d6h56bh68fhcfh574h694h54h5b9hcch56fq,ValueFrom:nil,},EnvVar{Name:ovnnorthd-config,Value:n5c8h7ch56bh8dh8hc4h5dch9dh68h6bhb7h598h549h5dbh66fh6bh5b4h5cch5d6h55ch57fhfch588h89h5ddh5d6h65bh65bh8dhc4h67dh569q,ValueFrom:nil,},EnvVar{Name:ovnnorthd-scripts,Value:n664hd8h66ch58dh64hc9h66bhd4h558h697h67bh557hdch664h567h669h555h696h556h556h5fh5bh569hbh665h9dh4h9bh564hc8h5b7h5c4q,ValueFrom:nil,},EnvVar{Name:tls-ca-bundle.pem,Value:n5dchffh557h54h658h4h66dh7bh568hb8hfdh6bh657h7fh5c5h5bbhb8h56h98h684h77h684h5dch589h59h5d9h68ch655h679h669h94h5c6q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-northd-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-northd-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:t
ls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-northd-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dww49,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/status_check.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/status_check.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:15,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-northd-0_openstack(22702874-a9ba-4491-aed2-5ef93384150c): ErrImagePull: rpc error: code = Canceled desc = writing blob: storing blob to file \"/var/tmp/container_images_storage1438120769/1\": happened during read: context canceled" logger="UnhandledError" Feb 28 04:52:21 crc kubenswrapper[5014]: E0228 04:52:21.192906 5014 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"ovn-northd\" with ErrImagePull: \"rpc error: code = Canceled desc = writing blob: storing blob to file \\\"/var/tmp/container_images_storage1438120769/1\\\": happened during read: context canceled\"" pod="openstack/ovn-northd-0" podUID="22702874-a9ba-4491-aed2-5ef93384150c" Feb 28 04:52:21 crc kubenswrapper[5014]: I0228 04:52:21.203562 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 28 04:52:21 crc kubenswrapper[5014]: I0228 04:52:21.304213 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 28 04:52:21 crc kubenswrapper[5014]: I0228 04:52:21.452530 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7f896c8c65-hsmzr" podUID="0fe0cfd6-ec18-4221-8722-8be777814e26" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.111:5353: i/o timeout" Feb 28 04:52:21 crc kubenswrapper[5014]: I0228 04:52:21.794393 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-n2z8v"] Feb 28 04:52:21 crc kubenswrapper[5014]: E0228 04:52:21.794865 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fe0cfd6-ec18-4221-8722-8be777814e26" containerName="init" Feb 28 04:52:21 crc kubenswrapper[5014]: I0228 04:52:21.794900 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fe0cfd6-ec18-4221-8722-8be777814e26" containerName="init" Feb 28 04:52:21 crc kubenswrapper[5014]: E0228 04:52:21.794949 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fe0cfd6-ec18-4221-8722-8be777814e26" containerName="dnsmasq-dns" Feb 28 04:52:21 crc kubenswrapper[5014]: I0228 04:52:21.794958 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fe0cfd6-ec18-4221-8722-8be777814e26" containerName="dnsmasq-dns" Feb 28 04:52:21 crc kubenswrapper[5014]: I0228 04:52:21.795128 5014 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="0fe0cfd6-ec18-4221-8722-8be777814e26" containerName="dnsmasq-dns" Feb 28 04:52:21 crc kubenswrapper[5014]: I0228 04:52:21.795670 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-n2z8v" Feb 28 04:52:21 crc kubenswrapper[5014]: I0228 04:52:21.819176 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-3ded-account-create-update-jxfct"] Feb 28 04:52:21 crc kubenswrapper[5014]: I0228 04:52:21.828123 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3ded-account-create-update-jxfct" Feb 28 04:52:21 crc kubenswrapper[5014]: I0228 04:52:21.830835 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 28 04:52:21 crc kubenswrapper[5014]: I0228 04:52:21.834913 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3ded-account-create-update-jxfct"] Feb 28 04:52:21 crc kubenswrapper[5014]: I0228 04:52:21.843111 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd179fc0-8f02-477b-88db-7f4e27bc5b5a-operator-scripts\") pod \"keystone-3ded-account-create-update-jxfct\" (UID: \"dd179fc0-8f02-477b-88db-7f4e27bc5b5a\") " pod="openstack/keystone-3ded-account-create-update-jxfct" Feb 28 04:52:21 crc kubenswrapper[5014]: I0228 04:52:21.843172 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gghss\" (UniqueName: \"kubernetes.io/projected/dd179fc0-8f02-477b-88db-7f4e27bc5b5a-kube-api-access-gghss\") pod \"keystone-3ded-account-create-update-jxfct\" (UID: \"dd179fc0-8f02-477b-88db-7f4e27bc5b5a\") " pod="openstack/keystone-3ded-account-create-update-jxfct" Feb 28 04:52:21 crc kubenswrapper[5014]: I0228 04:52:21.843270 5014 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ee7b14b-72c2-44e4-9e19-5b3351c8adef-operator-scripts\") pod \"keystone-db-create-n2z8v\" (UID: \"7ee7b14b-72c2-44e4-9e19-5b3351c8adef\") " pod="openstack/keystone-db-create-n2z8v" Feb 28 04:52:21 crc kubenswrapper[5014]: I0228 04:52:21.843376 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ls52\" (UniqueName: \"kubernetes.io/projected/7ee7b14b-72c2-44e4-9e19-5b3351c8adef-kube-api-access-4ls52\") pod \"keystone-db-create-n2z8v\" (UID: \"7ee7b14b-72c2-44e4-9e19-5b3351c8adef\") " pod="openstack/keystone-db-create-n2z8v" Feb 28 04:52:21 crc kubenswrapper[5014]: I0228 04:52:21.846668 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-n2z8v"] Feb 28 04:52:21 crc kubenswrapper[5014]: I0228 04:52:21.923035 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-5f554"] Feb 28 04:52:21 crc kubenswrapper[5014]: I0228 04:52:21.924268 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-5f554" Feb 28 04:52:21 crc kubenswrapper[5014]: I0228 04:52:21.939166 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-5f554"] Feb 28 04:52:21 crc kubenswrapper[5014]: I0228 04:52:21.953921 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gghss\" (UniqueName: \"kubernetes.io/projected/dd179fc0-8f02-477b-88db-7f4e27bc5b5a-kube-api-access-gghss\") pod \"keystone-3ded-account-create-update-jxfct\" (UID: \"dd179fc0-8f02-477b-88db-7f4e27bc5b5a\") " pod="openstack/keystone-3ded-account-create-update-jxfct" Feb 28 04:52:21 crc kubenswrapper[5014]: I0228 04:52:21.953961 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ee7b14b-72c2-44e4-9e19-5b3351c8adef-operator-scripts\") pod \"keystone-db-create-n2z8v\" (UID: \"7ee7b14b-72c2-44e4-9e19-5b3351c8adef\") " pod="openstack/keystone-db-create-n2z8v" Feb 28 04:52:21 crc kubenswrapper[5014]: I0228 04:52:21.954008 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2f9q\" (UniqueName: \"kubernetes.io/projected/b7a56f69-c15e-45a1-9a37-a8a0d635f307-kube-api-access-d2f9q\") pod \"placement-db-create-5f554\" (UID: \"b7a56f69-c15e-45a1-9a37-a8a0d635f307\") " pod="openstack/placement-db-create-5f554" Feb 28 04:52:21 crc kubenswrapper[5014]: I0228 04:52:21.954064 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2998e28e-fceb-4daa-a26c-74bffeba0d8f-etc-swift\") pod \"swift-storage-0\" (UID: \"2998e28e-fceb-4daa-a26c-74bffeba0d8f\") " pod="openstack/swift-storage-0" Feb 28 04:52:21 crc kubenswrapper[5014]: I0228 04:52:21.954093 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ls52\" (UniqueName: 
\"kubernetes.io/projected/7ee7b14b-72c2-44e4-9e19-5b3351c8adef-kube-api-access-4ls52\") pod \"keystone-db-create-n2z8v\" (UID: \"7ee7b14b-72c2-44e4-9e19-5b3351c8adef\") " pod="openstack/keystone-db-create-n2z8v" Feb 28 04:52:21 crc kubenswrapper[5014]: I0228 04:52:21.954117 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7a56f69-c15e-45a1-9a37-a8a0d635f307-operator-scripts\") pod \"placement-db-create-5f554\" (UID: \"b7a56f69-c15e-45a1-9a37-a8a0d635f307\") " pod="openstack/placement-db-create-5f554" Feb 28 04:52:21 crc kubenswrapper[5014]: I0228 04:52:21.954230 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd179fc0-8f02-477b-88db-7f4e27bc5b5a-operator-scripts\") pod \"keystone-3ded-account-create-update-jxfct\" (UID: \"dd179fc0-8f02-477b-88db-7f4e27bc5b5a\") " pod="openstack/keystone-3ded-account-create-update-jxfct" Feb 28 04:52:21 crc kubenswrapper[5014]: I0228 04:52:21.955020 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd179fc0-8f02-477b-88db-7f4e27bc5b5a-operator-scripts\") pod \"keystone-3ded-account-create-update-jxfct\" (UID: \"dd179fc0-8f02-477b-88db-7f4e27bc5b5a\") " pod="openstack/keystone-3ded-account-create-update-jxfct" Feb 28 04:52:21 crc kubenswrapper[5014]: E0228 04:52:21.955125 5014 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 28 04:52:21 crc kubenswrapper[5014]: E0228 04:52:21.955144 5014 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 28 04:52:21 crc kubenswrapper[5014]: E0228 04:52:21.955178 5014 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/2998e28e-fceb-4daa-a26c-74bffeba0d8f-etc-swift podName:2998e28e-fceb-4daa-a26c-74bffeba0d8f nodeName:}" failed. No retries permitted until 2026-02-28 04:52:29.955166033 +0000 UTC m=+1138.625291943 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2998e28e-fceb-4daa-a26c-74bffeba0d8f-etc-swift") pod "swift-storage-0" (UID: "2998e28e-fceb-4daa-a26c-74bffeba0d8f") : configmap "swift-ring-files" not found Feb 28 04:52:21 crc kubenswrapper[5014]: I0228 04:52:21.956117 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ee7b14b-72c2-44e4-9e19-5b3351c8adef-operator-scripts\") pod \"keystone-db-create-n2z8v\" (UID: \"7ee7b14b-72c2-44e4-9e19-5b3351c8adef\") " pod="openstack/keystone-db-create-n2z8v" Feb 28 04:52:21 crc kubenswrapper[5014]: I0228 04:52:21.975630 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gghss\" (UniqueName: \"kubernetes.io/projected/dd179fc0-8f02-477b-88db-7f4e27bc5b5a-kube-api-access-gghss\") pod \"keystone-3ded-account-create-update-jxfct\" (UID: \"dd179fc0-8f02-477b-88db-7f4e27bc5b5a\") " pod="openstack/keystone-3ded-account-create-update-jxfct" Feb 28 04:52:21 crc kubenswrapper[5014]: I0228 04:52:21.976482 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ls52\" (UniqueName: \"kubernetes.io/projected/7ee7b14b-72c2-44e4-9e19-5b3351c8adef-kube-api-access-4ls52\") pod \"keystone-db-create-n2z8v\" (UID: \"7ee7b14b-72c2-44e4-9e19-5b3351c8adef\") " pod="openstack/keystone-db-create-n2z8v" Feb 28 04:52:22 crc kubenswrapper[5014]: I0228 04:52:22.055779 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2f9q\" (UniqueName: \"kubernetes.io/projected/b7a56f69-c15e-45a1-9a37-a8a0d635f307-kube-api-access-d2f9q\") pod 
\"placement-db-create-5f554\" (UID: \"b7a56f69-c15e-45a1-9a37-a8a0d635f307\") " pod="openstack/placement-db-create-5f554" Feb 28 04:52:22 crc kubenswrapper[5014]: I0228 04:52:22.056242 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7a56f69-c15e-45a1-9a37-a8a0d635f307-operator-scripts\") pod \"placement-db-create-5f554\" (UID: \"b7a56f69-c15e-45a1-9a37-a8a0d635f307\") " pod="openstack/placement-db-create-5f554" Feb 28 04:52:22 crc kubenswrapper[5014]: I0228 04:52:22.057052 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7a56f69-c15e-45a1-9a37-a8a0d635f307-operator-scripts\") pod \"placement-db-create-5f554\" (UID: \"b7a56f69-c15e-45a1-9a37-a8a0d635f307\") " pod="openstack/placement-db-create-5f554" Feb 28 04:52:22 crc kubenswrapper[5014]: I0228 04:52:22.077651 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"22702874-a9ba-4491-aed2-5ef93384150c","Type":"ContainerStarted","Data":"5599be619d91451fa96456109042f3f6b847832bed18a86c177087edd1bc90a3"} Feb 28 04:52:22 crc kubenswrapper[5014]: E0228 04:52:22.080772 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-northd\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified\\\"\"" pod="openstack/ovn-northd-0" podUID="22702874-a9ba-4491-aed2-5ef93384150c" Feb 28 04:52:22 crc kubenswrapper[5014]: I0228 04:52:22.082403 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2f9q\" (UniqueName: \"kubernetes.io/projected/b7a56f69-c15e-45a1-9a37-a8a0d635f307-kube-api-access-d2f9q\") pod \"placement-db-create-5f554\" (UID: \"b7a56f69-c15e-45a1-9a37-a8a0d635f307\") " pod="openstack/placement-db-create-5f554" Feb 28 04:52:22 crc kubenswrapper[5014]: 
I0228 04:52:22.086315 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-3523-account-create-update-xd6xg"] Feb 28 04:52:22 crc kubenswrapper[5014]: I0228 04:52:22.087664 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3523-account-create-update-xd6xg" Feb 28 04:52:22 crc kubenswrapper[5014]: I0228 04:52:22.088904 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 28 04:52:22 crc kubenswrapper[5014]: I0228 04:52:22.122207 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-n2z8v" Feb 28 04:52:22 crc kubenswrapper[5014]: I0228 04:52:22.159180 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7q9z\" (UniqueName: \"kubernetes.io/projected/47438bcb-f130-4a0d-b000-fc61e91a5762-kube-api-access-r7q9z\") pod \"placement-3523-account-create-update-xd6xg\" (UID: \"47438bcb-f130-4a0d-b000-fc61e91a5762\") " pod="openstack/placement-3523-account-create-update-xd6xg" Feb 28 04:52:22 crc kubenswrapper[5014]: I0228 04:52:22.159232 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47438bcb-f130-4a0d-b000-fc61e91a5762-operator-scripts\") pod \"placement-3523-account-create-update-xd6xg\" (UID: \"47438bcb-f130-4a0d-b000-fc61e91a5762\") " pod="openstack/placement-3523-account-create-update-xd6xg" Feb 28 04:52:22 crc kubenswrapper[5014]: I0228 04:52:22.160769 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-3ded-account-create-update-jxfct" Feb 28 04:52:22 crc kubenswrapper[5014]: I0228 04:52:22.170466 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3523-account-create-update-xd6xg"] Feb 28 04:52:22 crc kubenswrapper[5014]: I0228 04:52:22.258642 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-5f554" Feb 28 04:52:22 crc kubenswrapper[5014]: I0228 04:52:22.260600 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7q9z\" (UniqueName: \"kubernetes.io/projected/47438bcb-f130-4a0d-b000-fc61e91a5762-kube-api-access-r7q9z\") pod \"placement-3523-account-create-update-xd6xg\" (UID: \"47438bcb-f130-4a0d-b000-fc61e91a5762\") " pod="openstack/placement-3523-account-create-update-xd6xg" Feb 28 04:52:22 crc kubenswrapper[5014]: I0228 04:52:22.260643 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47438bcb-f130-4a0d-b000-fc61e91a5762-operator-scripts\") pod \"placement-3523-account-create-update-xd6xg\" (UID: \"47438bcb-f130-4a0d-b000-fc61e91a5762\") " pod="openstack/placement-3523-account-create-update-xd6xg" Feb 28 04:52:22 crc kubenswrapper[5014]: I0228 04:52:22.272342 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47438bcb-f130-4a0d-b000-fc61e91a5762-operator-scripts\") pod \"placement-3523-account-create-update-xd6xg\" (UID: \"47438bcb-f130-4a0d-b000-fc61e91a5762\") " pod="openstack/placement-3523-account-create-update-xd6xg" Feb 28 04:52:22 crc kubenswrapper[5014]: I0228 04:52:22.277340 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7q9z\" (UniqueName: \"kubernetes.io/projected/47438bcb-f130-4a0d-b000-fc61e91a5762-kube-api-access-r7q9z\") pod 
\"placement-3523-account-create-update-xd6xg\" (UID: \"47438bcb-f130-4a0d-b000-fc61e91a5762\") " pod="openstack/placement-3523-account-create-update-xd6xg" Feb 28 04:52:22 crc kubenswrapper[5014]: I0228 04:52:22.459186 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3523-account-create-update-xd6xg" Feb 28 04:52:22 crc kubenswrapper[5014]: I0228 04:52:22.552083 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 28 04:52:22 crc kubenswrapper[5014]: I0228 04:52:22.646085 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 28 04:52:23 crc kubenswrapper[5014]: E0228 04:52:23.093021 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-northd\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified\\\"\"" pod="openstack/ovn-northd-0" podUID="22702874-a9ba-4491-aed2-5ef93384150c" Feb 28 04:52:23 crc kubenswrapper[5014]: I0228 04:52:23.321066 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-kgdjz" Feb 28 04:52:23 crc kubenswrapper[5014]: I0228 04:52:23.375559 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-h9blq"] Feb 28 04:52:23 crc kubenswrapper[5014]: I0228 04:52:23.375849 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-h9blq" podUID="6bfcf6cb-666f-44c2-885b-f916a1e81b8f" containerName="dnsmasq-dns" containerID="cri-o://ced0c0d22ef31cfa6e340fd47580c4cd1815ff4a32d0a7c8a2e1bba831a98d57" gracePeriod=10 Feb 28 04:52:24 crc kubenswrapper[5014]: I0228 04:52:24.100125 5014 generic.go:334] "Generic (PLEG): container finished" podID="6bfcf6cb-666f-44c2-885b-f916a1e81b8f" 
containerID="ced0c0d22ef31cfa6e340fd47580c4cd1815ff4a32d0a7c8a2e1bba831a98d57" exitCode=0 Feb 28 04:52:24 crc kubenswrapper[5014]: I0228 04:52:24.100418 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-h9blq" event={"ID":"6bfcf6cb-666f-44c2-885b-f916a1e81b8f","Type":"ContainerDied","Data":"ced0c0d22ef31cfa6e340fd47580c4cd1815ff4a32d0a7c8a2e1bba831a98d57"} Feb 28 04:52:24 crc kubenswrapper[5014]: I0228 04:52:24.463720 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-h9blq" Feb 28 04:52:24 crc kubenswrapper[5014]: I0228 04:52:24.598062 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bfcf6cb-666f-44c2-885b-f916a1e81b8f-config\") pod \"6bfcf6cb-666f-44c2-885b-f916a1e81b8f\" (UID: \"6bfcf6cb-666f-44c2-885b-f916a1e81b8f\") " Feb 28 04:52:24 crc kubenswrapper[5014]: I0228 04:52:24.598374 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smzbc\" (UniqueName: \"kubernetes.io/projected/6bfcf6cb-666f-44c2-885b-f916a1e81b8f-kube-api-access-smzbc\") pod \"6bfcf6cb-666f-44c2-885b-f916a1e81b8f\" (UID: \"6bfcf6cb-666f-44c2-885b-f916a1e81b8f\") " Feb 28 04:52:24 crc kubenswrapper[5014]: I0228 04:52:24.598400 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bfcf6cb-666f-44c2-885b-f916a1e81b8f-ovsdbserver-nb\") pod \"6bfcf6cb-666f-44c2-885b-f916a1e81b8f\" (UID: \"6bfcf6cb-666f-44c2-885b-f916a1e81b8f\") " Feb 28 04:52:24 crc kubenswrapper[5014]: I0228 04:52:24.598422 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bfcf6cb-666f-44c2-885b-f916a1e81b8f-dns-svc\") pod \"6bfcf6cb-666f-44c2-885b-f916a1e81b8f\" (UID: \"6bfcf6cb-666f-44c2-885b-f916a1e81b8f\") " Feb 28 
04:52:24 crc kubenswrapper[5014]: I0228 04:52:24.598466 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bfcf6cb-666f-44c2-885b-f916a1e81b8f-ovsdbserver-sb\") pod \"6bfcf6cb-666f-44c2-885b-f916a1e81b8f\" (UID: \"6bfcf6cb-666f-44c2-885b-f916a1e81b8f\") " Feb 28 04:52:24 crc kubenswrapper[5014]: I0228 04:52:24.608340 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bfcf6cb-666f-44c2-885b-f916a1e81b8f-kube-api-access-smzbc" (OuterVolumeSpecName: "kube-api-access-smzbc") pod "6bfcf6cb-666f-44c2-885b-f916a1e81b8f" (UID: "6bfcf6cb-666f-44c2-885b-f916a1e81b8f"). InnerVolumeSpecName "kube-api-access-smzbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:52:24 crc kubenswrapper[5014]: I0228 04:52:24.634929 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bfcf6cb-666f-44c2-885b-f916a1e81b8f-config" (OuterVolumeSpecName: "config") pod "6bfcf6cb-666f-44c2-885b-f916a1e81b8f" (UID: "6bfcf6cb-666f-44c2-885b-f916a1e81b8f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:52:24 crc kubenswrapper[5014]: I0228 04:52:24.639923 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bfcf6cb-666f-44c2-885b-f916a1e81b8f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6bfcf6cb-666f-44c2-885b-f916a1e81b8f" (UID: "6bfcf6cb-666f-44c2-885b-f916a1e81b8f"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:52:24 crc kubenswrapper[5014]: I0228 04:52:24.639948 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bfcf6cb-666f-44c2-885b-f916a1e81b8f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6bfcf6cb-666f-44c2-885b-f916a1e81b8f" (UID: "6bfcf6cb-666f-44c2-885b-f916a1e81b8f"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:52:24 crc kubenswrapper[5014]: I0228 04:52:24.640626 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bfcf6cb-666f-44c2-885b-f916a1e81b8f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6bfcf6cb-666f-44c2-885b-f916a1e81b8f" (UID: "6bfcf6cb-666f-44c2-885b-f916a1e81b8f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:52:24 crc kubenswrapper[5014]: I0228 04:52:24.699922 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bfcf6cb-666f-44c2-885b-f916a1e81b8f-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:24 crc kubenswrapper[5014]: I0228 04:52:24.699955 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smzbc\" (UniqueName: \"kubernetes.io/projected/6bfcf6cb-666f-44c2-885b-f916a1e81b8f-kube-api-access-smzbc\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:24 crc kubenswrapper[5014]: I0228 04:52:24.699967 5014 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bfcf6cb-666f-44c2-885b-f916a1e81b8f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:24 crc kubenswrapper[5014]: I0228 04:52:24.699978 5014 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bfcf6cb-666f-44c2-885b-f916a1e81b8f-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:24 crc 
kubenswrapper[5014]: I0228 04:52:24.699986 5014 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bfcf6cb-666f-44c2-885b-f916a1e81b8f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:24 crc kubenswrapper[5014]: I0228 04:52:24.719960 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3523-account-create-update-xd6xg"] Feb 28 04:52:24 crc kubenswrapper[5014]: I0228 04:52:24.795619 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3ded-account-create-update-jxfct"] Feb 28 04:52:24 crc kubenswrapper[5014]: I0228 04:52:24.803043 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-n2z8v"] Feb 28 04:52:24 crc kubenswrapper[5014]: I0228 04:52:24.820623 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-5f554"] Feb 28 04:52:25 crc kubenswrapper[5014]: I0228 04:52:25.110279 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3523-account-create-update-xd6xg" event={"ID":"47438bcb-f130-4a0d-b000-fc61e91a5762","Type":"ContainerStarted","Data":"074da8711d8579fc6973bca64ac7f723d59c79406e212a6740aeb5e9ed872931"} Feb 28 04:52:25 crc kubenswrapper[5014]: I0228 04:52:25.110614 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3523-account-create-update-xd6xg" event={"ID":"47438bcb-f130-4a0d-b000-fc61e91a5762","Type":"ContainerStarted","Data":"f69dc4251f8c392a4a3dee0b08dd8bdbec7830b8cfe79cc75aefe6ebbba937f8"} Feb 28 04:52:25 crc kubenswrapper[5014]: I0228 04:52:25.112425 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dn9mn" event={"ID":"15c6e56b-a312-43c9-b627-af4138518fe4","Type":"ContainerStarted","Data":"0e1bba1257ee79b718d75f8c65c121cd6e3c770f2167d36f9cf41455c346bcfa"} Feb 28 04:52:25 crc kubenswrapper[5014]: I0228 04:52:25.114896 5014 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-h9blq" Feb 28 04:52:25 crc kubenswrapper[5014]: I0228 04:52:25.114894 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-h9blq" event={"ID":"6bfcf6cb-666f-44c2-885b-f916a1e81b8f","Type":"ContainerDied","Data":"5333c16832b171490af02df9a6c507adc8f7e1b74465366d29d500f9463b2cba"} Feb 28 04:52:25 crc kubenswrapper[5014]: I0228 04:52:25.115034 5014 scope.go:117] "RemoveContainer" containerID="ced0c0d22ef31cfa6e340fd47580c4cd1815ff4a32d0a7c8a2e1bba831a98d57" Feb 28 04:52:25 crc kubenswrapper[5014]: I0228 04:52:25.116316 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-n2z8v" event={"ID":"7ee7b14b-72c2-44e4-9e19-5b3351c8adef","Type":"ContainerStarted","Data":"4460f1174c4ed73725408e55d6342a6bdac47f876e73b1cd67cd81e087589dbf"} Feb 28 04:52:25 crc kubenswrapper[5014]: I0228 04:52:25.116358 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-n2z8v" event={"ID":"7ee7b14b-72c2-44e4-9e19-5b3351c8adef","Type":"ContainerStarted","Data":"697de46ec50ff1adf90776086922ce772a7f6e27634a012bbda52c7e6968190e"} Feb 28 04:52:25 crc kubenswrapper[5014]: I0228 04:52:25.118713 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3ded-account-create-update-jxfct" event={"ID":"dd179fc0-8f02-477b-88db-7f4e27bc5b5a","Type":"ContainerStarted","Data":"fddefa038a4ad353a98dce29b7ce157696f0942effef9f7c70434972964ef3f8"} Feb 28 04:52:25 crc kubenswrapper[5014]: I0228 04:52:25.118756 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3ded-account-create-update-jxfct" event={"ID":"dd179fc0-8f02-477b-88db-7f4e27bc5b5a","Type":"ContainerStarted","Data":"8e5a776fc3fd9243c352e55787a18a4a5316502e9cc4ca981cb85f8ea2f088b8"} Feb 28 04:52:25 crc kubenswrapper[5014]: I0228 04:52:25.121123 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-db-create-5f554" event={"ID":"b7a56f69-c15e-45a1-9a37-a8a0d635f307","Type":"ContainerStarted","Data":"a113d8261bf42819854fc91b840910fa4734009b3c0509bd87318623d24afbbb"} Feb 28 04:52:25 crc kubenswrapper[5014]: I0228 04:52:25.121175 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5f554" event={"ID":"b7a56f69-c15e-45a1-9a37-a8a0d635f307","Type":"ContainerStarted","Data":"1c7d67cfdab75b1be3da89428229dcbd9995555dabaeb23e6ffcab9e3d634bcb"} Feb 28 04:52:25 crc kubenswrapper[5014]: I0228 04:52:25.140191 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-3523-account-create-update-xd6xg" podStartSLOduration=3.14016921 podStartE2EDuration="3.14016921s" podCreationTimestamp="2026-02-28 04:52:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:52:25.129497282 +0000 UTC m=+1133.799623192" watchObservedRunningTime="2026-02-28 04:52:25.14016921 +0000 UTC m=+1133.810295130" Feb 28 04:52:25 crc kubenswrapper[5014]: I0228 04:52:25.149962 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-create-n2z8v" podStartSLOduration=4.149939563 podStartE2EDuration="4.149939563s" podCreationTimestamp="2026-02-28 04:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:52:25.142894954 +0000 UTC m=+1133.813020864" watchObservedRunningTime="2026-02-28 04:52:25.149939563 +0000 UTC m=+1133.820065473" Feb 28 04:52:25 crc kubenswrapper[5014]: I0228 04:52:25.153602 5014 scope.go:117] "RemoveContainer" containerID="e0f66f1f5a0a23fb2e474489605f990c5d7f13b76b4f764250127d8395ebb8da" Feb 28 04:52:25 crc kubenswrapper[5014]: I0228 04:52:25.181429 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/swift-ring-rebalance-dn9mn" podStartSLOduration=2.179766393 podStartE2EDuration="11.181403301s" podCreationTimestamp="2026-02-28 04:52:14 +0000 UTC" firstStartedPulling="2026-02-28 04:52:15.219549798 +0000 UTC m=+1123.889675718" lastFinishedPulling="2026-02-28 04:52:24.221186716 +0000 UTC m=+1132.891312626" observedRunningTime="2026-02-28 04:52:25.176995373 +0000 UTC m=+1133.847121283" watchObservedRunningTime="2026-02-28 04:52:25.181403301 +0000 UTC m=+1133.851529211" Feb 28 04:52:25 crc kubenswrapper[5014]: I0228 04:52:25.205206 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-3ded-account-create-update-jxfct" podStartSLOduration=4.205180333 podStartE2EDuration="4.205180333s" podCreationTimestamp="2026-02-28 04:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:52:25.201623948 +0000 UTC m=+1133.871749858" watchObservedRunningTime="2026-02-28 04:52:25.205180333 +0000 UTC m=+1133.875306243" Feb 28 04:52:25 crc kubenswrapper[5014]: I0228 04:52:25.218035 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-5f554" podStartSLOduration=4.218012049 podStartE2EDuration="4.218012049s" podCreationTimestamp="2026-02-28 04:52:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:52:25.214628108 +0000 UTC m=+1133.884754018" watchObservedRunningTime="2026-02-28 04:52:25.218012049 +0000 UTC m=+1133.888137959" Feb 28 04:52:25 crc kubenswrapper[5014]: I0228 04:52:25.231122 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-h9blq"] Feb 28 04:52:25 crc kubenswrapper[5014]: I0228 04:52:25.238862 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-h9blq"] Feb 28 04:52:25 crc 
kubenswrapper[5014]: I0228 04:52:25.866745 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-jsf2v"] Feb 28 04:52:25 crc kubenswrapper[5014]: E0228 04:52:25.867256 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bfcf6cb-666f-44c2-885b-f916a1e81b8f" containerName="dnsmasq-dns" Feb 28 04:52:25 crc kubenswrapper[5014]: I0228 04:52:25.867280 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bfcf6cb-666f-44c2-885b-f916a1e81b8f" containerName="dnsmasq-dns" Feb 28 04:52:25 crc kubenswrapper[5014]: E0228 04:52:25.867328 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bfcf6cb-666f-44c2-885b-f916a1e81b8f" containerName="init" Feb 28 04:52:25 crc kubenswrapper[5014]: I0228 04:52:25.867339 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bfcf6cb-666f-44c2-885b-f916a1e81b8f" containerName="init" Feb 28 04:52:25 crc kubenswrapper[5014]: I0228 04:52:25.867597 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bfcf6cb-666f-44c2-885b-f916a1e81b8f" containerName="dnsmasq-dns" Feb 28 04:52:25 crc kubenswrapper[5014]: I0228 04:52:25.868433 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jsf2v" Feb 28 04:52:25 crc kubenswrapper[5014]: I0228 04:52:25.878754 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-jsf2v"] Feb 28 04:52:25 crc kubenswrapper[5014]: I0228 04:52:25.973755 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-400e-account-create-update-xtzgq"] Feb 28 04:52:25 crc kubenswrapper[5014]: I0228 04:52:25.975340 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-400e-account-create-update-xtzgq" Feb 28 04:52:25 crc kubenswrapper[5014]: I0228 04:52:25.979773 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 28 04:52:25 crc kubenswrapper[5014]: I0228 04:52:25.981178 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-400e-account-create-update-xtzgq"] Feb 28 04:52:26 crc kubenswrapper[5014]: I0228 04:52:26.021574 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssvdv\" (UniqueName: \"kubernetes.io/projected/6efb968a-6151-439b-a324-e36d9c8b2dee-kube-api-access-ssvdv\") pod \"glance-db-create-jsf2v\" (UID: \"6efb968a-6151-439b-a324-e36d9c8b2dee\") " pod="openstack/glance-db-create-jsf2v" Feb 28 04:52:26 crc kubenswrapper[5014]: I0228 04:52:26.022177 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6efb968a-6151-439b-a324-e36d9c8b2dee-operator-scripts\") pod \"glance-db-create-jsf2v\" (UID: \"6efb968a-6151-439b-a324-e36d9c8b2dee\") " pod="openstack/glance-db-create-jsf2v" Feb 28 04:52:26 crc kubenswrapper[5014]: I0228 04:52:26.124980 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-928n7\" (UniqueName: \"kubernetes.io/projected/3e6957be-e258-44d6-b0d3-e1317a0310c1-kube-api-access-928n7\") pod \"glance-400e-account-create-update-xtzgq\" (UID: \"3e6957be-e258-44d6-b0d3-e1317a0310c1\") " pod="openstack/glance-400e-account-create-update-xtzgq" Feb 28 04:52:26 crc kubenswrapper[5014]: I0228 04:52:26.125157 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e6957be-e258-44d6-b0d3-e1317a0310c1-operator-scripts\") pod \"glance-400e-account-create-update-xtzgq\" 
(UID: \"3e6957be-e258-44d6-b0d3-e1317a0310c1\") " pod="openstack/glance-400e-account-create-update-xtzgq" Feb 28 04:52:26 crc kubenswrapper[5014]: I0228 04:52:26.125506 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssvdv\" (UniqueName: \"kubernetes.io/projected/6efb968a-6151-439b-a324-e36d9c8b2dee-kube-api-access-ssvdv\") pod \"glance-db-create-jsf2v\" (UID: \"6efb968a-6151-439b-a324-e36d9c8b2dee\") " pod="openstack/glance-db-create-jsf2v" Feb 28 04:52:26 crc kubenswrapper[5014]: I0228 04:52:26.125561 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6efb968a-6151-439b-a324-e36d9c8b2dee-operator-scripts\") pod \"glance-db-create-jsf2v\" (UID: \"6efb968a-6151-439b-a324-e36d9c8b2dee\") " pod="openstack/glance-db-create-jsf2v" Feb 28 04:52:26 crc kubenswrapper[5014]: I0228 04:52:26.126688 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6efb968a-6151-439b-a324-e36d9c8b2dee-operator-scripts\") pod \"glance-db-create-jsf2v\" (UID: \"6efb968a-6151-439b-a324-e36d9c8b2dee\") " pod="openstack/glance-db-create-jsf2v" Feb 28 04:52:26 crc kubenswrapper[5014]: I0228 04:52:26.134092 5014 generic.go:334] "Generic (PLEG): container finished" podID="7ee7b14b-72c2-44e4-9e19-5b3351c8adef" containerID="4460f1174c4ed73725408e55d6342a6bdac47f876e73b1cd67cd81e087589dbf" exitCode=0 Feb 28 04:52:26 crc kubenswrapper[5014]: I0228 04:52:26.134173 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-n2z8v" event={"ID":"7ee7b14b-72c2-44e4-9e19-5b3351c8adef","Type":"ContainerDied","Data":"4460f1174c4ed73725408e55d6342a6bdac47f876e73b1cd67cd81e087589dbf"} Feb 28 04:52:26 crc kubenswrapper[5014]: I0228 04:52:26.135569 5014 generic.go:334] "Generic (PLEG): container finished" podID="dd179fc0-8f02-477b-88db-7f4e27bc5b5a" 
containerID="fddefa038a4ad353a98dce29b7ce157696f0942effef9f7c70434972964ef3f8" exitCode=0 Feb 28 04:52:26 crc kubenswrapper[5014]: I0228 04:52:26.135629 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3ded-account-create-update-jxfct" event={"ID":"dd179fc0-8f02-477b-88db-7f4e27bc5b5a","Type":"ContainerDied","Data":"fddefa038a4ad353a98dce29b7ce157696f0942effef9f7c70434972964ef3f8"} Feb 28 04:52:26 crc kubenswrapper[5014]: I0228 04:52:26.141346 5014 generic.go:334] "Generic (PLEG): container finished" podID="b7a56f69-c15e-45a1-9a37-a8a0d635f307" containerID="a113d8261bf42819854fc91b840910fa4734009b3c0509bd87318623d24afbbb" exitCode=0 Feb 28 04:52:26 crc kubenswrapper[5014]: I0228 04:52:26.141418 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5f554" event={"ID":"b7a56f69-c15e-45a1-9a37-a8a0d635f307","Type":"ContainerDied","Data":"a113d8261bf42819854fc91b840910fa4734009b3c0509bd87318623d24afbbb"} Feb 28 04:52:26 crc kubenswrapper[5014]: I0228 04:52:26.143134 5014 generic.go:334] "Generic (PLEG): container finished" podID="47438bcb-f130-4a0d-b000-fc61e91a5762" containerID="074da8711d8579fc6973bca64ac7f723d59c79406e212a6740aeb5e9ed872931" exitCode=0 Feb 28 04:52:26 crc kubenswrapper[5014]: I0228 04:52:26.143929 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3523-account-create-update-xd6xg" event={"ID":"47438bcb-f130-4a0d-b000-fc61e91a5762","Type":"ContainerDied","Data":"074da8711d8579fc6973bca64ac7f723d59c79406e212a6740aeb5e9ed872931"} Feb 28 04:52:26 crc kubenswrapper[5014]: I0228 04:52:26.145344 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssvdv\" (UniqueName: \"kubernetes.io/projected/6efb968a-6151-439b-a324-e36d9c8b2dee-kube-api-access-ssvdv\") pod \"glance-db-create-jsf2v\" (UID: \"6efb968a-6151-439b-a324-e36d9c8b2dee\") " pod="openstack/glance-db-create-jsf2v" Feb 28 04:52:26 crc kubenswrapper[5014]: I0228 
04:52:26.193392 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jsf2v" Feb 28 04:52:26 crc kubenswrapper[5014]: I0228 04:52:26.201649 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bfcf6cb-666f-44c2-885b-f916a1e81b8f" path="/var/lib/kubelet/pods/6bfcf6cb-666f-44c2-885b-f916a1e81b8f/volumes" Feb 28 04:52:26 crc kubenswrapper[5014]: I0228 04:52:26.232854 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-928n7\" (UniqueName: \"kubernetes.io/projected/3e6957be-e258-44d6-b0d3-e1317a0310c1-kube-api-access-928n7\") pod \"glance-400e-account-create-update-xtzgq\" (UID: \"3e6957be-e258-44d6-b0d3-e1317a0310c1\") " pod="openstack/glance-400e-account-create-update-xtzgq" Feb 28 04:52:26 crc kubenswrapper[5014]: I0228 04:52:26.232925 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e6957be-e258-44d6-b0d3-e1317a0310c1-operator-scripts\") pod \"glance-400e-account-create-update-xtzgq\" (UID: \"3e6957be-e258-44d6-b0d3-e1317a0310c1\") " pod="openstack/glance-400e-account-create-update-xtzgq" Feb 28 04:52:26 crc kubenswrapper[5014]: I0228 04:52:26.236034 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e6957be-e258-44d6-b0d3-e1317a0310c1-operator-scripts\") pod \"glance-400e-account-create-update-xtzgq\" (UID: \"3e6957be-e258-44d6-b0d3-e1317a0310c1\") " pod="openstack/glance-400e-account-create-update-xtzgq" Feb 28 04:52:26 crc kubenswrapper[5014]: I0228 04:52:26.263566 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-928n7\" (UniqueName: \"kubernetes.io/projected/3e6957be-e258-44d6-b0d3-e1317a0310c1-kube-api-access-928n7\") pod \"glance-400e-account-create-update-xtzgq\" (UID: \"3e6957be-e258-44d6-b0d3-e1317a0310c1\") " 
pod="openstack/glance-400e-account-create-update-xtzgq" Feb 28 04:52:26 crc kubenswrapper[5014]: I0228 04:52:26.309991 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-400e-account-create-update-xtzgq" Feb 28 04:52:26 crc kubenswrapper[5014]: I0228 04:52:26.787798 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-jsf2v"] Feb 28 04:52:26 crc kubenswrapper[5014]: W0228 04:52:26.789564 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6efb968a_6151_439b_a324_e36d9c8b2dee.slice/crio-66f5bab03fa89cb1ba853226f5c98cfac93e4f2277fb8d45884f5bbd9df0c5d9 WatchSource:0}: Error finding container 66f5bab03fa89cb1ba853226f5c98cfac93e4f2277fb8d45884f5bbd9df0c5d9: Status 404 returned error can't find the container with id 66f5bab03fa89cb1ba853226f5c98cfac93e4f2277fb8d45884f5bbd9df0c5d9 Feb 28 04:52:26 crc kubenswrapper[5014]: I0228 04:52:26.834456 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-400e-account-create-update-xtzgq"] Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.157147 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-400e-account-create-update-xtzgq" event={"ID":"3e6957be-e258-44d6-b0d3-e1317a0310c1","Type":"ContainerStarted","Data":"be2fa0324dbceeec1f1a48344693a1c48b87ea689ea2e6f070c6a45ed41953e8"} Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.157472 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-400e-account-create-update-xtzgq" event={"ID":"3e6957be-e258-44d6-b0d3-e1317a0310c1","Type":"ContainerStarted","Data":"93733ecabd2e54d5f52bf2b49ff4012f3e691bce877ffd2144f0bbe5a7d99c6d"} Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.159893 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jsf2v" 
event={"ID":"6efb968a-6151-439b-a324-e36d9c8b2dee","Type":"ContainerStarted","Data":"bf5c7f94241b8576990c9e23cb39a4105490fa24dc0f93456818da8fac53b60b"} Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.159953 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jsf2v" event={"ID":"6efb968a-6151-439b-a324-e36d9c8b2dee","Type":"ContainerStarted","Data":"66f5bab03fa89cb1ba853226f5c98cfac93e4f2277fb8d45884f5bbd9df0c5d9"} Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.173985 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-400e-account-create-update-xtzgq" podStartSLOduration=2.17397088 podStartE2EDuration="2.17397088s" podCreationTimestamp="2026-02-28 04:52:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:52:27.172038098 +0000 UTC m=+1135.842164008" watchObservedRunningTime="2026-02-28 04:52:27.17397088 +0000 UTC m=+1135.844096790" Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.185121 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-jsf2v" podStartSLOduration=2.18510019 podStartE2EDuration="2.18510019s" podCreationTimestamp="2026-02-28 04:52:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:52:27.183312472 +0000 UTC m=+1135.853438392" watchObservedRunningTime="2026-02-28 04:52:27.18510019 +0000 UTC m=+1135.855226100" Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.476891 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-5f554" Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.628164 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-n6vxb"] Feb 28 04:52:27 crc kubenswrapper[5014]: E0228 04:52:27.628510 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7a56f69-c15e-45a1-9a37-a8a0d635f307" containerName="mariadb-database-create" Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.628526 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7a56f69-c15e-45a1-9a37-a8a0d635f307" containerName="mariadb-database-create" Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.628676 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7a56f69-c15e-45a1-9a37-a8a0d635f307" containerName="mariadb-database-create" Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.632155 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-n6vxb" Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.639010 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.643017 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-n6vxb"] Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.653441 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7a56f69-c15e-45a1-9a37-a8a0d635f307-operator-scripts\") pod \"b7a56f69-c15e-45a1-9a37-a8a0d635f307\" (UID: \"b7a56f69-c15e-45a1-9a37-a8a0d635f307\") " Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.653494 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2f9q\" (UniqueName: 
\"kubernetes.io/projected/b7a56f69-c15e-45a1-9a37-a8a0d635f307-kube-api-access-d2f9q\") pod \"b7a56f69-c15e-45a1-9a37-a8a0d635f307\" (UID: \"b7a56f69-c15e-45a1-9a37-a8a0d635f307\") " Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.657147 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7a56f69-c15e-45a1-9a37-a8a0d635f307-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b7a56f69-c15e-45a1-9a37-a8a0d635f307" (UID: "b7a56f69-c15e-45a1-9a37-a8a0d635f307"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.659613 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7a56f69-c15e-45a1-9a37-a8a0d635f307-kube-api-access-d2f9q" (OuterVolumeSpecName: "kube-api-access-d2f9q") pod "b7a56f69-c15e-45a1-9a37-a8a0d635f307" (UID: "b7a56f69-c15e-45a1-9a37-a8a0d635f307"). InnerVolumeSpecName "kube-api-access-d2f9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.699619 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3ded-account-create-update-jxfct" Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.719868 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-n2z8v" Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.729670 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-3523-account-create-update-xd6xg" Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.755069 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llk96\" (UniqueName: \"kubernetes.io/projected/159dc98c-67f0-45f3-bdb2-413c2ee86402-kube-api-access-llk96\") pod \"root-account-create-update-n6vxb\" (UID: \"159dc98c-67f0-45f3-bdb2-413c2ee86402\") " pod="openstack/root-account-create-update-n6vxb" Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.755124 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/159dc98c-67f0-45f3-bdb2-413c2ee86402-operator-scripts\") pod \"root-account-create-update-n6vxb\" (UID: \"159dc98c-67f0-45f3-bdb2-413c2ee86402\") " pod="openstack/root-account-create-update-n6vxb" Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.755322 5014 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b7a56f69-c15e-45a1-9a37-a8a0d635f307-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.755338 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2f9q\" (UniqueName: \"kubernetes.io/projected/b7a56f69-c15e-45a1-9a37-a8a0d635f307-kube-api-access-d2f9q\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.856335 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7q9z\" (UniqueName: \"kubernetes.io/projected/47438bcb-f130-4a0d-b000-fc61e91a5762-kube-api-access-r7q9z\") pod \"47438bcb-f130-4a0d-b000-fc61e91a5762\" (UID: \"47438bcb-f130-4a0d-b000-fc61e91a5762\") " Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.856774 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ee7b14b-72c2-44e4-9e19-5b3351c8adef-operator-scripts\") pod \"7ee7b14b-72c2-44e4-9e19-5b3351c8adef\" (UID: \"7ee7b14b-72c2-44e4-9e19-5b3351c8adef\") " Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.856826 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd179fc0-8f02-477b-88db-7f4e27bc5b5a-operator-scripts\") pod \"dd179fc0-8f02-477b-88db-7f4e27bc5b5a\" (UID: \"dd179fc0-8f02-477b-88db-7f4e27bc5b5a\") " Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.856851 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gghss\" (UniqueName: \"kubernetes.io/projected/dd179fc0-8f02-477b-88db-7f4e27bc5b5a-kube-api-access-gghss\") pod \"dd179fc0-8f02-477b-88db-7f4e27bc5b5a\" (UID: \"dd179fc0-8f02-477b-88db-7f4e27bc5b5a\") " Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.857235 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7ee7b14b-72c2-44e4-9e19-5b3351c8adef-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7ee7b14b-72c2-44e4-9e19-5b3351c8adef" (UID: "7ee7b14b-72c2-44e4-9e19-5b3351c8adef"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.857343 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd179fc0-8f02-477b-88db-7f4e27bc5b5a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dd179fc0-8f02-477b-88db-7f4e27bc5b5a" (UID: "dd179fc0-8f02-477b-88db-7f4e27bc5b5a"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.857360 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ls52\" (UniqueName: \"kubernetes.io/projected/7ee7b14b-72c2-44e4-9e19-5b3351c8adef-kube-api-access-4ls52\") pod \"7ee7b14b-72c2-44e4-9e19-5b3351c8adef\" (UID: \"7ee7b14b-72c2-44e4-9e19-5b3351c8adef\") " Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.857443 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47438bcb-f130-4a0d-b000-fc61e91a5762-operator-scripts\") pod \"47438bcb-f130-4a0d-b000-fc61e91a5762\" (UID: \"47438bcb-f130-4a0d-b000-fc61e91a5762\") " Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.857835 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47438bcb-f130-4a0d-b000-fc61e91a5762-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "47438bcb-f130-4a0d-b000-fc61e91a5762" (UID: "47438bcb-f130-4a0d-b000-fc61e91a5762"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.857882 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llk96\" (UniqueName: \"kubernetes.io/projected/159dc98c-67f0-45f3-bdb2-413c2ee86402-kube-api-access-llk96\") pod \"root-account-create-update-n6vxb\" (UID: \"159dc98c-67f0-45f3-bdb2-413c2ee86402\") " pod="openstack/root-account-create-update-n6vxb" Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.858140 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/159dc98c-67f0-45f3-bdb2-413c2ee86402-operator-scripts\") pod \"root-account-create-update-n6vxb\" (UID: \"159dc98c-67f0-45f3-bdb2-413c2ee86402\") " pod="openstack/root-account-create-update-n6vxb" Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.858658 5014 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47438bcb-f130-4a0d-b000-fc61e91a5762-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.858743 5014 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7ee7b14b-72c2-44e4-9e19-5b3351c8adef-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.859084 5014 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd179fc0-8f02-477b-88db-7f4e27bc5b5a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.859142 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/159dc98c-67f0-45f3-bdb2-413c2ee86402-operator-scripts\") pod \"root-account-create-update-n6vxb\" (UID: 
\"159dc98c-67f0-45f3-bdb2-413c2ee86402\") " pod="openstack/root-account-create-update-n6vxb" Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.860028 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47438bcb-f130-4a0d-b000-fc61e91a5762-kube-api-access-r7q9z" (OuterVolumeSpecName: "kube-api-access-r7q9z") pod "47438bcb-f130-4a0d-b000-fc61e91a5762" (UID: "47438bcb-f130-4a0d-b000-fc61e91a5762"). InnerVolumeSpecName "kube-api-access-r7q9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.861019 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd179fc0-8f02-477b-88db-7f4e27bc5b5a-kube-api-access-gghss" (OuterVolumeSpecName: "kube-api-access-gghss") pod "dd179fc0-8f02-477b-88db-7f4e27bc5b5a" (UID: "dd179fc0-8f02-477b-88db-7f4e27bc5b5a"). InnerVolumeSpecName "kube-api-access-gghss". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.861174 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ee7b14b-72c2-44e4-9e19-5b3351c8adef-kube-api-access-4ls52" (OuterVolumeSpecName: "kube-api-access-4ls52") pod "7ee7b14b-72c2-44e4-9e19-5b3351c8adef" (UID: "7ee7b14b-72c2-44e4-9e19-5b3351c8adef"). InnerVolumeSpecName "kube-api-access-4ls52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.873868 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llk96\" (UniqueName: \"kubernetes.io/projected/159dc98c-67f0-45f3-bdb2-413c2ee86402-kube-api-access-llk96\") pod \"root-account-create-update-n6vxb\" (UID: \"159dc98c-67f0-45f3-bdb2-413c2ee86402\") " pod="openstack/root-account-create-update-n6vxb" Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.960998 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ls52\" (UniqueName: \"kubernetes.io/projected/7ee7b14b-72c2-44e4-9e19-5b3351c8adef-kube-api-access-4ls52\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.961040 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7q9z\" (UniqueName: \"kubernetes.io/projected/47438bcb-f130-4a0d-b000-fc61e91a5762-kube-api-access-r7q9z\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:27 crc kubenswrapper[5014]: I0228 04:52:27.961053 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gghss\" (UniqueName: \"kubernetes.io/projected/dd179fc0-8f02-477b-88db-7f4e27bc5b5a-kube-api-access-gghss\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:28 crc kubenswrapper[5014]: I0228 04:52:28.013749 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-n6vxb" Feb 28 04:52:28 crc kubenswrapper[5014]: I0228 04:52:28.192027 5014 generic.go:334] "Generic (PLEG): container finished" podID="3e6957be-e258-44d6-b0d3-e1317a0310c1" containerID="be2fa0324dbceeec1f1a48344693a1c48b87ea689ea2e6f070c6a45ed41953e8" exitCode=0 Feb 28 04:52:28 crc kubenswrapper[5014]: I0228 04:52:28.204755 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-3523-account-create-update-xd6xg" Feb 28 04:52:28 crc kubenswrapper[5014]: I0228 04:52:28.219050 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-n2z8v" Feb 28 04:52:28 crc kubenswrapper[5014]: I0228 04:52:28.219139 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-400e-account-create-update-xtzgq" event={"ID":"3e6957be-e258-44d6-b0d3-e1317a0310c1","Type":"ContainerDied","Data":"be2fa0324dbceeec1f1a48344693a1c48b87ea689ea2e6f070c6a45ed41953e8"} Feb 28 04:52:28 crc kubenswrapper[5014]: I0228 04:52:28.219183 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3523-account-create-update-xd6xg" event={"ID":"47438bcb-f130-4a0d-b000-fc61e91a5762","Type":"ContainerDied","Data":"f69dc4251f8c392a4a3dee0b08dd8bdbec7830b8cfe79cc75aefe6ebbba937f8"} Feb 28 04:52:28 crc kubenswrapper[5014]: I0228 04:52:28.219199 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f69dc4251f8c392a4a3dee0b08dd8bdbec7830b8cfe79cc75aefe6ebbba937f8" Feb 28 04:52:28 crc kubenswrapper[5014]: I0228 04:52:28.219395 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-n2z8v" event={"ID":"7ee7b14b-72c2-44e4-9e19-5b3351c8adef","Type":"ContainerDied","Data":"697de46ec50ff1adf90776086922ce772a7f6e27634a012bbda52c7e6968190e"} Feb 28 04:52:28 crc kubenswrapper[5014]: I0228 04:52:28.219463 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="697de46ec50ff1adf90776086922ce772a7f6e27634a012bbda52c7e6968190e" Feb 28 04:52:28 crc kubenswrapper[5014]: I0228 04:52:28.236266 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3ded-account-create-update-jxfct" event={"ID":"dd179fc0-8f02-477b-88db-7f4e27bc5b5a","Type":"ContainerDied","Data":"8e5a776fc3fd9243c352e55787a18a4a5316502e9cc4ca981cb85f8ea2f088b8"} Feb 28 
04:52:28 crc kubenswrapper[5014]: I0228 04:52:28.236335 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e5a776fc3fd9243c352e55787a18a4a5316502e9cc4ca981cb85f8ea2f088b8" Feb 28 04:52:28 crc kubenswrapper[5014]: I0228 04:52:28.236278 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3ded-account-create-update-jxfct" Feb 28 04:52:28 crc kubenswrapper[5014]: I0228 04:52:28.239345 5014 generic.go:334] "Generic (PLEG): container finished" podID="6efb968a-6151-439b-a324-e36d9c8b2dee" containerID="bf5c7f94241b8576990c9e23cb39a4105490fa24dc0f93456818da8fac53b60b" exitCode=0 Feb 28 04:52:28 crc kubenswrapper[5014]: I0228 04:52:28.239438 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jsf2v" event={"ID":"6efb968a-6151-439b-a324-e36d9c8b2dee","Type":"ContainerDied","Data":"bf5c7f94241b8576990c9e23cb39a4105490fa24dc0f93456818da8fac53b60b"} Feb 28 04:52:28 crc kubenswrapper[5014]: I0228 04:52:28.243740 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-5f554" Feb 28 04:52:28 crc kubenswrapper[5014]: I0228 04:52:28.243728 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-5f554" event={"ID":"b7a56f69-c15e-45a1-9a37-a8a0d635f307","Type":"ContainerDied","Data":"1c7d67cfdab75b1be3da89428229dcbd9995555dabaeb23e6ffcab9e3d634bcb"} Feb 28 04:52:28 crc kubenswrapper[5014]: I0228 04:52:28.244164 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c7d67cfdab75b1be3da89428229dcbd9995555dabaeb23e6ffcab9e3d634bcb" Feb 28 04:52:28 crc kubenswrapper[5014]: I0228 04:52:28.571170 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-n6vxb"] Feb 28 04:52:29 crc kubenswrapper[5014]: I0228 04:52:29.256083 5014 generic.go:334] "Generic (PLEG): container finished" podID="159dc98c-67f0-45f3-bdb2-413c2ee86402" containerID="027f85da454c64f840a013237eb9aba105367f15604330158a90689b04503b70" exitCode=0 Feb 28 04:52:29 crc kubenswrapper[5014]: I0228 04:52:29.256219 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-n6vxb" event={"ID":"159dc98c-67f0-45f3-bdb2-413c2ee86402","Type":"ContainerDied","Data":"027f85da454c64f840a013237eb9aba105367f15604330158a90689b04503b70"} Feb 28 04:52:29 crc kubenswrapper[5014]: I0228 04:52:29.257227 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-n6vxb" event={"ID":"159dc98c-67f0-45f3-bdb2-413c2ee86402","Type":"ContainerStarted","Data":"8e1e482b11ab8908fb5d1b81299306046b61c5b8b2fb6eda411e9051910d97b0"} Feb 28 04:52:29 crc kubenswrapper[5014]: I0228 04:52:29.681524 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-400e-account-create-update-xtzgq" Feb 28 04:52:29 crc kubenswrapper[5014]: I0228 04:52:29.689495 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-jsf2v" Feb 28 04:52:29 crc kubenswrapper[5014]: I0228 04:52:29.806015 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssvdv\" (UniqueName: \"kubernetes.io/projected/6efb968a-6151-439b-a324-e36d9c8b2dee-kube-api-access-ssvdv\") pod \"6efb968a-6151-439b-a324-e36d9c8b2dee\" (UID: \"6efb968a-6151-439b-a324-e36d9c8b2dee\") " Feb 28 04:52:29 crc kubenswrapper[5014]: I0228 04:52:29.806093 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-928n7\" (UniqueName: \"kubernetes.io/projected/3e6957be-e258-44d6-b0d3-e1317a0310c1-kube-api-access-928n7\") pod \"3e6957be-e258-44d6-b0d3-e1317a0310c1\" (UID: \"3e6957be-e258-44d6-b0d3-e1317a0310c1\") " Feb 28 04:52:29 crc kubenswrapper[5014]: I0228 04:52:29.807132 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e6957be-e258-44d6-b0d3-e1317a0310c1-operator-scripts\") pod \"3e6957be-e258-44d6-b0d3-e1317a0310c1\" (UID: \"3e6957be-e258-44d6-b0d3-e1317a0310c1\") " Feb 28 04:52:29 crc kubenswrapper[5014]: I0228 04:52:29.807159 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6efb968a-6151-439b-a324-e36d9c8b2dee-operator-scripts\") pod \"6efb968a-6151-439b-a324-e36d9c8b2dee\" (UID: \"6efb968a-6151-439b-a324-e36d9c8b2dee\") " Feb 28 04:52:29 crc kubenswrapper[5014]: I0228 04:52:29.807759 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e6957be-e258-44d6-b0d3-e1317a0310c1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3e6957be-e258-44d6-b0d3-e1317a0310c1" (UID: "3e6957be-e258-44d6-b0d3-e1317a0310c1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:52:29 crc kubenswrapper[5014]: I0228 04:52:29.807834 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6efb968a-6151-439b-a324-e36d9c8b2dee-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6efb968a-6151-439b-a324-e36d9c8b2dee" (UID: "6efb968a-6151-439b-a324-e36d9c8b2dee"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:52:29 crc kubenswrapper[5014]: I0228 04:52:29.812274 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e6957be-e258-44d6-b0d3-e1317a0310c1-kube-api-access-928n7" (OuterVolumeSpecName: "kube-api-access-928n7") pod "3e6957be-e258-44d6-b0d3-e1317a0310c1" (UID: "3e6957be-e258-44d6-b0d3-e1317a0310c1"). InnerVolumeSpecName "kube-api-access-928n7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:52:29 crc kubenswrapper[5014]: I0228 04:52:29.821052 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6efb968a-6151-439b-a324-e36d9c8b2dee-kube-api-access-ssvdv" (OuterVolumeSpecName: "kube-api-access-ssvdv") pod "6efb968a-6151-439b-a324-e36d9c8b2dee" (UID: "6efb968a-6151-439b-a324-e36d9c8b2dee"). InnerVolumeSpecName "kube-api-access-ssvdv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:52:29 crc kubenswrapper[5014]: I0228 04:52:29.908234 5014 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e6957be-e258-44d6-b0d3-e1317a0310c1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:29 crc kubenswrapper[5014]: I0228 04:52:29.908266 5014 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6efb968a-6151-439b-a324-e36d9c8b2dee-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:29 crc kubenswrapper[5014]: I0228 04:52:29.908276 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssvdv\" (UniqueName: \"kubernetes.io/projected/6efb968a-6151-439b-a324-e36d9c8b2dee-kube-api-access-ssvdv\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:29 crc kubenswrapper[5014]: I0228 04:52:29.908287 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-928n7\" (UniqueName: \"kubernetes.io/projected/3e6957be-e258-44d6-b0d3-e1317a0310c1-kube-api-access-928n7\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:30 crc kubenswrapper[5014]: I0228 04:52:30.009690 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2998e28e-fceb-4daa-a26c-74bffeba0d8f-etc-swift\") pod \"swift-storage-0\" (UID: \"2998e28e-fceb-4daa-a26c-74bffeba0d8f\") " pod="openstack/swift-storage-0" Feb 28 04:52:30 crc kubenswrapper[5014]: E0228 04:52:30.009887 5014 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 28 04:52:30 crc kubenswrapper[5014]: E0228 04:52:30.009915 5014 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 28 04:52:30 crc kubenswrapper[5014]: E0228 04:52:30.009972 5014 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2998e28e-fceb-4daa-a26c-74bffeba0d8f-etc-swift podName:2998e28e-fceb-4daa-a26c-74bffeba0d8f nodeName:}" failed. No retries permitted until 2026-02-28 04:52:46.009953764 +0000 UTC m=+1154.680079674 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/2998e28e-fceb-4daa-a26c-74bffeba0d8f-etc-swift") pod "swift-storage-0" (UID: "2998e28e-fceb-4daa-a26c-74bffeba0d8f") : configmap "swift-ring-files" not found Feb 28 04:52:30 crc kubenswrapper[5014]: I0228 04:52:30.265824 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jsf2v" event={"ID":"6efb968a-6151-439b-a324-e36d9c8b2dee","Type":"ContainerDied","Data":"66f5bab03fa89cb1ba853226f5c98cfac93e4f2277fb8d45884f5bbd9df0c5d9"} Feb 28 04:52:30 crc kubenswrapper[5014]: I0228 04:52:30.265877 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66f5bab03fa89cb1ba853226f5c98cfac93e4f2277fb8d45884f5bbd9df0c5d9" Feb 28 04:52:30 crc kubenswrapper[5014]: I0228 04:52:30.265878 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jsf2v" Feb 28 04:52:30 crc kubenswrapper[5014]: I0228 04:52:30.267188 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-400e-account-create-update-xtzgq" event={"ID":"3e6957be-e258-44d6-b0d3-e1317a0310c1","Type":"ContainerDied","Data":"93733ecabd2e54d5f52bf2b49ff4012f3e691bce877ffd2144f0bbe5a7d99c6d"} Feb 28 04:52:30 crc kubenswrapper[5014]: I0228 04:52:30.267222 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93733ecabd2e54d5f52bf2b49ff4012f3e691bce877ffd2144f0bbe5a7d99c6d" Feb 28 04:52:30 crc kubenswrapper[5014]: I0228 04:52:30.267299 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-400e-account-create-update-xtzgq" Feb 28 04:52:30 crc kubenswrapper[5014]: I0228 04:52:30.593401 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-n6vxb" Feb 28 04:52:30 crc kubenswrapper[5014]: I0228 04:52:30.723100 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/159dc98c-67f0-45f3-bdb2-413c2ee86402-operator-scripts\") pod \"159dc98c-67f0-45f3-bdb2-413c2ee86402\" (UID: \"159dc98c-67f0-45f3-bdb2-413c2ee86402\") " Feb 28 04:52:30 crc kubenswrapper[5014]: I0228 04:52:30.723453 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llk96\" (UniqueName: \"kubernetes.io/projected/159dc98c-67f0-45f3-bdb2-413c2ee86402-kube-api-access-llk96\") pod \"159dc98c-67f0-45f3-bdb2-413c2ee86402\" (UID: \"159dc98c-67f0-45f3-bdb2-413c2ee86402\") " Feb 28 04:52:30 crc kubenswrapper[5014]: I0228 04:52:30.723887 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/159dc98c-67f0-45f3-bdb2-413c2ee86402-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "159dc98c-67f0-45f3-bdb2-413c2ee86402" (UID: "159dc98c-67f0-45f3-bdb2-413c2ee86402"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:52:30 crc kubenswrapper[5014]: I0228 04:52:30.724186 5014 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/159dc98c-67f0-45f3-bdb2-413c2ee86402-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:30 crc kubenswrapper[5014]: I0228 04:52:30.731399 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/159dc98c-67f0-45f3-bdb2-413c2ee86402-kube-api-access-llk96" (OuterVolumeSpecName: "kube-api-access-llk96") pod "159dc98c-67f0-45f3-bdb2-413c2ee86402" (UID: "159dc98c-67f0-45f3-bdb2-413c2ee86402"). InnerVolumeSpecName "kube-api-access-llk96". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:52:30 crc kubenswrapper[5014]: I0228 04:52:30.777638 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-9qps6" podUID="02ab5d98-13ab-483d-b32b-a509bedd8ded" containerName="ovn-controller" probeResult="failure" output=< Feb 28 04:52:30 crc kubenswrapper[5014]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 28 04:52:30 crc kubenswrapper[5014]: > Feb 28 04:52:30 crc kubenswrapper[5014]: I0228 04:52:30.826176 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llk96\" (UniqueName: \"kubernetes.io/projected/159dc98c-67f0-45f3-bdb2-413c2ee86402-kube-api-access-llk96\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:31 crc kubenswrapper[5014]: I0228 04:52:31.117356 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-tz5jx"] Feb 28 04:52:31 crc kubenswrapper[5014]: E0228 04:52:31.118075 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="159dc98c-67f0-45f3-bdb2-413c2ee86402" containerName="mariadb-account-create-update" Feb 28 04:52:31 crc kubenswrapper[5014]: I0228 04:52:31.118168 5014 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="159dc98c-67f0-45f3-bdb2-413c2ee86402" containerName="mariadb-account-create-update" Feb 28 04:52:31 crc kubenswrapper[5014]: E0228 04:52:31.118276 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd179fc0-8f02-477b-88db-7f4e27bc5b5a" containerName="mariadb-account-create-update" Feb 28 04:52:31 crc kubenswrapper[5014]: I0228 04:52:31.118360 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd179fc0-8f02-477b-88db-7f4e27bc5b5a" containerName="mariadb-account-create-update" Feb 28 04:52:31 crc kubenswrapper[5014]: E0228 04:52:31.118455 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6efb968a-6151-439b-a324-e36d9c8b2dee" containerName="mariadb-database-create" Feb 28 04:52:31 crc kubenswrapper[5014]: I0228 04:52:31.118532 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="6efb968a-6151-439b-a324-e36d9c8b2dee" containerName="mariadb-database-create" Feb 28 04:52:31 crc kubenswrapper[5014]: E0228 04:52:31.118663 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ee7b14b-72c2-44e4-9e19-5b3351c8adef" containerName="mariadb-database-create" Feb 28 04:52:31 crc kubenswrapper[5014]: I0228 04:52:31.118734 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ee7b14b-72c2-44e4-9e19-5b3351c8adef" containerName="mariadb-database-create" Feb 28 04:52:31 crc kubenswrapper[5014]: E0228 04:52:31.118835 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47438bcb-f130-4a0d-b000-fc61e91a5762" containerName="mariadb-account-create-update" Feb 28 04:52:31 crc kubenswrapper[5014]: I0228 04:52:31.118910 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="47438bcb-f130-4a0d-b000-fc61e91a5762" containerName="mariadb-account-create-update" Feb 28 04:52:31 crc kubenswrapper[5014]: E0228 04:52:31.118994 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e6957be-e258-44d6-b0d3-e1317a0310c1" containerName="mariadb-account-create-update" Feb 28 
04:52:31 crc kubenswrapper[5014]: I0228 04:52:31.119058 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e6957be-e258-44d6-b0d3-e1317a0310c1" containerName="mariadb-account-create-update" Feb 28 04:52:31 crc kubenswrapper[5014]: I0228 04:52:31.119352 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e6957be-e258-44d6-b0d3-e1317a0310c1" containerName="mariadb-account-create-update" Feb 28 04:52:31 crc kubenswrapper[5014]: I0228 04:52:31.119456 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="6efb968a-6151-439b-a324-e36d9c8b2dee" containerName="mariadb-database-create" Feb 28 04:52:31 crc kubenswrapper[5014]: I0228 04:52:31.119538 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="47438bcb-f130-4a0d-b000-fc61e91a5762" containerName="mariadb-account-create-update" Feb 28 04:52:31 crc kubenswrapper[5014]: I0228 04:52:31.119620 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ee7b14b-72c2-44e4-9e19-5b3351c8adef" containerName="mariadb-database-create" Feb 28 04:52:31 crc kubenswrapper[5014]: I0228 04:52:31.119692 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="159dc98c-67f0-45f3-bdb2-413c2ee86402" containerName="mariadb-account-create-update" Feb 28 04:52:31 crc kubenswrapper[5014]: I0228 04:52:31.119768 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd179fc0-8f02-477b-88db-7f4e27bc5b5a" containerName="mariadb-account-create-update" Feb 28 04:52:31 crc kubenswrapper[5014]: I0228 04:52:31.120501 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-tz5jx" Feb 28 04:52:31 crc kubenswrapper[5014]: I0228 04:52:31.122719 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qvwbm" Feb 28 04:52:31 crc kubenswrapper[5014]: I0228 04:52:31.123166 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 28 04:52:31 crc kubenswrapper[5014]: I0228 04:52:31.139897 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-tz5jx"] Feb 28 04:52:31 crc kubenswrapper[5014]: I0228 04:52:31.233406 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2j248\" (UniqueName: \"kubernetes.io/projected/d232d598-8b65-47f6-a5dc-9d77d37d9b80-kube-api-access-2j248\") pod \"glance-db-sync-tz5jx\" (UID: \"d232d598-8b65-47f6-a5dc-9d77d37d9b80\") " pod="openstack/glance-db-sync-tz5jx" Feb 28 04:52:31 crc kubenswrapper[5014]: I0228 04:52:31.233649 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d232d598-8b65-47f6-a5dc-9d77d37d9b80-config-data\") pod \"glance-db-sync-tz5jx\" (UID: \"d232d598-8b65-47f6-a5dc-9d77d37d9b80\") " pod="openstack/glance-db-sync-tz5jx" Feb 28 04:52:31 crc kubenswrapper[5014]: I0228 04:52:31.233737 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d232d598-8b65-47f6-a5dc-9d77d37d9b80-combined-ca-bundle\") pod \"glance-db-sync-tz5jx\" (UID: \"d232d598-8b65-47f6-a5dc-9d77d37d9b80\") " pod="openstack/glance-db-sync-tz5jx" Feb 28 04:52:31 crc kubenswrapper[5014]: I0228 04:52:31.233930 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/d232d598-8b65-47f6-a5dc-9d77d37d9b80-db-sync-config-data\") pod \"glance-db-sync-tz5jx\" (UID: \"d232d598-8b65-47f6-a5dc-9d77d37d9b80\") " pod="openstack/glance-db-sync-tz5jx" Feb 28 04:52:31 crc kubenswrapper[5014]: I0228 04:52:31.276211 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-n6vxb" event={"ID":"159dc98c-67f0-45f3-bdb2-413c2ee86402","Type":"ContainerDied","Data":"8e1e482b11ab8908fb5d1b81299306046b61c5b8b2fb6eda411e9051910d97b0"} Feb 28 04:52:31 crc kubenswrapper[5014]: I0228 04:52:31.277250 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e1e482b11ab8908fb5d1b81299306046b61c5b8b2fb6eda411e9051910d97b0" Feb 28 04:52:31 crc kubenswrapper[5014]: I0228 04:52:31.276254 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-n6vxb" Feb 28 04:52:31 crc kubenswrapper[5014]: I0228 04:52:31.335762 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d232d598-8b65-47f6-a5dc-9d77d37d9b80-db-sync-config-data\") pod \"glance-db-sync-tz5jx\" (UID: \"d232d598-8b65-47f6-a5dc-9d77d37d9b80\") " pod="openstack/glance-db-sync-tz5jx" Feb 28 04:52:31 crc kubenswrapper[5014]: I0228 04:52:31.335856 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2j248\" (UniqueName: \"kubernetes.io/projected/d232d598-8b65-47f6-a5dc-9d77d37d9b80-kube-api-access-2j248\") pod \"glance-db-sync-tz5jx\" (UID: \"d232d598-8b65-47f6-a5dc-9d77d37d9b80\") " pod="openstack/glance-db-sync-tz5jx" Feb 28 04:52:31 crc kubenswrapper[5014]: I0228 04:52:31.335905 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d232d598-8b65-47f6-a5dc-9d77d37d9b80-config-data\") pod \"glance-db-sync-tz5jx\" (UID: 
\"d232d598-8b65-47f6-a5dc-9d77d37d9b80\") " pod="openstack/glance-db-sync-tz5jx" Feb 28 04:52:31 crc kubenswrapper[5014]: I0228 04:52:31.335970 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d232d598-8b65-47f6-a5dc-9d77d37d9b80-combined-ca-bundle\") pod \"glance-db-sync-tz5jx\" (UID: \"d232d598-8b65-47f6-a5dc-9d77d37d9b80\") " pod="openstack/glance-db-sync-tz5jx" Feb 28 04:52:31 crc kubenswrapper[5014]: I0228 04:52:31.340540 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d232d598-8b65-47f6-a5dc-9d77d37d9b80-db-sync-config-data\") pod \"glance-db-sync-tz5jx\" (UID: \"d232d598-8b65-47f6-a5dc-9d77d37d9b80\") " pod="openstack/glance-db-sync-tz5jx" Feb 28 04:52:31 crc kubenswrapper[5014]: I0228 04:52:31.341462 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d232d598-8b65-47f6-a5dc-9d77d37d9b80-combined-ca-bundle\") pod \"glance-db-sync-tz5jx\" (UID: \"d232d598-8b65-47f6-a5dc-9d77d37d9b80\") " pod="openstack/glance-db-sync-tz5jx" Feb 28 04:52:31 crc kubenswrapper[5014]: I0228 04:52:31.342368 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d232d598-8b65-47f6-a5dc-9d77d37d9b80-config-data\") pod \"glance-db-sync-tz5jx\" (UID: \"d232d598-8b65-47f6-a5dc-9d77d37d9b80\") " pod="openstack/glance-db-sync-tz5jx" Feb 28 04:52:31 crc kubenswrapper[5014]: I0228 04:52:31.360479 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2j248\" (UniqueName: \"kubernetes.io/projected/d232d598-8b65-47f6-a5dc-9d77d37d9b80-kube-api-access-2j248\") pod \"glance-db-sync-tz5jx\" (UID: \"d232d598-8b65-47f6-a5dc-9d77d37d9b80\") " pod="openstack/glance-db-sync-tz5jx" Feb 28 04:52:31 crc kubenswrapper[5014]: I0228 04:52:31.436733 
5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-tz5jx" Feb 28 04:52:32 crc kubenswrapper[5014]: I0228 04:52:32.047709 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-tz5jx"] Feb 28 04:52:32 crc kubenswrapper[5014]: W0228 04:52:32.059286 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd232d598_8b65_47f6_a5dc_9d77d37d9b80.slice/crio-050158c92a08f38e760f4c84eb7c0b9e74fafb8edcdedee3b95b5e369c0a8f41 WatchSource:0}: Error finding container 050158c92a08f38e760f4c84eb7c0b9e74fafb8edcdedee3b95b5e369c0a8f41: Status 404 returned error can't find the container with id 050158c92a08f38e760f4c84eb7c0b9e74fafb8edcdedee3b95b5e369c0a8f41 Feb 28 04:52:32 crc kubenswrapper[5014]: I0228 04:52:32.286602 5014 generic.go:334] "Generic (PLEG): container finished" podID="351fb773-0669-41c0-aee8-0469f34d64c9" containerID="6aa052f4b5e7c5a3ad8de9ccf2eb6301e3f49de02844097a1f59be13fb678de0" exitCode=0 Feb 28 04:52:32 crc kubenswrapper[5014]: I0228 04:52:32.286693 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"351fb773-0669-41c0-aee8-0469f34d64c9","Type":"ContainerDied","Data":"6aa052f4b5e7c5a3ad8de9ccf2eb6301e3f49de02844097a1f59be13fb678de0"} Feb 28 04:52:32 crc kubenswrapper[5014]: I0228 04:52:32.290087 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tz5jx" event={"ID":"d232d598-8b65-47f6-a5dc-9d77d37d9b80","Type":"ContainerStarted","Data":"050158c92a08f38e760f4c84eb7c0b9e74fafb8edcdedee3b95b5e369c0a8f41"} Feb 28 04:52:32 crc kubenswrapper[5014]: I0228 04:52:32.291938 5014 generic.go:334] "Generic (PLEG): container finished" podID="15c6e56b-a312-43c9-b627-af4138518fe4" containerID="0e1bba1257ee79b718d75f8c65c121cd6e3c770f2167d36f9cf41455c346bcfa" exitCode=0 Feb 28 04:52:32 crc kubenswrapper[5014]: I0228 04:52:32.292008 5014 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dn9mn" event={"ID":"15c6e56b-a312-43c9-b627-af4138518fe4","Type":"ContainerDied","Data":"0e1bba1257ee79b718d75f8c65c121cd6e3c770f2167d36f9cf41455c346bcfa"} Feb 28 04:52:32 crc kubenswrapper[5014]: I0228 04:52:32.293726 5014 generic.go:334] "Generic (PLEG): container finished" podID="46e75f06-c8df-44b8-a6e4-8f663e8b0a1a" containerID="0d1061f7a0ea20558bdded3c641a52419a84163e5db3bf1d2a4fd9e2cd9544e7" exitCode=0 Feb 28 04:52:32 crc kubenswrapper[5014]: I0228 04:52:32.293759 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a","Type":"ContainerDied","Data":"0d1061f7a0ea20558bdded3c641a52419a84163e5db3bf1d2a4fd9e2cd9544e7"} Feb 28 04:52:33 crc kubenswrapper[5014]: I0228 04:52:33.301865 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a","Type":"ContainerStarted","Data":"ef252e73b4755ebb79ca0372edd6145d6575e4965f3ab8414b00083c7d04ef30"} Feb 28 04:52:33 crc kubenswrapper[5014]: I0228 04:52:33.302660 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:52:33 crc kubenswrapper[5014]: I0228 04:52:33.305381 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"351fb773-0669-41c0-aee8-0469f34d64c9","Type":"ContainerStarted","Data":"29a94a8a21103b36ec5a9c08e355416cad5772f0c62b047c91ce146979b30c28"} Feb 28 04:52:33 crc kubenswrapper[5014]: I0228 04:52:33.305911 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 28 04:52:33 crc kubenswrapper[5014]: I0228 04:52:33.348701 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=49.345054681 
podStartE2EDuration="57.348670987s" podCreationTimestamp="2026-02-28 04:51:36 +0000 UTC" firstStartedPulling="2026-02-28 04:51:49.778669221 +0000 UTC m=+1098.448795131" lastFinishedPulling="2026-02-28 04:51:57.782285527 +0000 UTC m=+1106.452411437" observedRunningTime="2026-02-28 04:52:33.337387472 +0000 UTC m=+1142.007513402" watchObservedRunningTime="2026-02-28 04:52:33.348670987 +0000 UTC m=+1142.018796917" Feb 28 04:52:33 crc kubenswrapper[5014]: I0228 04:52:33.368410 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=49.053119432 podStartE2EDuration="57.368388888s" podCreationTimestamp="2026-02-28 04:51:36 +0000 UTC" firstStartedPulling="2026-02-28 04:51:49.427905517 +0000 UTC m=+1098.098031427" lastFinishedPulling="2026-02-28 04:51:57.743174973 +0000 UTC m=+1106.413300883" observedRunningTime="2026-02-28 04:52:33.365937172 +0000 UTC m=+1142.036063102" watchObservedRunningTime="2026-02-28 04:52:33.368388888 +0000 UTC m=+1142.038514798" Feb 28 04:52:33 crc kubenswrapper[5014]: I0228 04:52:33.691100 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-dn9mn" Feb 28 04:52:33 crc kubenswrapper[5014]: I0228 04:52:33.713478 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15c6e56b-a312-43c9-b627-af4138518fe4-scripts\") pod \"15c6e56b-a312-43c9-b627-af4138518fe4\" (UID: \"15c6e56b-a312-43c9-b627-af4138518fe4\") " Feb 28 04:52:33 crc kubenswrapper[5014]: I0228 04:52:33.713925 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/15c6e56b-a312-43c9-b627-af4138518fe4-etc-swift\") pod \"15c6e56b-a312-43c9-b627-af4138518fe4\" (UID: \"15c6e56b-a312-43c9-b627-af4138518fe4\") " Feb 28 04:52:33 crc kubenswrapper[5014]: I0228 04:52:33.714053 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/15c6e56b-a312-43c9-b627-af4138518fe4-swiftconf\") pod \"15c6e56b-a312-43c9-b627-af4138518fe4\" (UID: \"15c6e56b-a312-43c9-b627-af4138518fe4\") " Feb 28 04:52:33 crc kubenswrapper[5014]: I0228 04:52:33.714146 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2dqt\" (UniqueName: \"kubernetes.io/projected/15c6e56b-a312-43c9-b627-af4138518fe4-kube-api-access-x2dqt\") pod \"15c6e56b-a312-43c9-b627-af4138518fe4\" (UID: \"15c6e56b-a312-43c9-b627-af4138518fe4\") " Feb 28 04:52:33 crc kubenswrapper[5014]: I0228 04:52:33.714209 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/15c6e56b-a312-43c9-b627-af4138518fe4-ring-data-devices\") pod \"15c6e56b-a312-43c9-b627-af4138518fe4\" (UID: \"15c6e56b-a312-43c9-b627-af4138518fe4\") " Feb 28 04:52:33 crc kubenswrapper[5014]: I0228 04:52:33.714284 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/15c6e56b-a312-43c9-b627-af4138518fe4-combined-ca-bundle\") pod \"15c6e56b-a312-43c9-b627-af4138518fe4\" (UID: \"15c6e56b-a312-43c9-b627-af4138518fe4\") " Feb 28 04:52:33 crc kubenswrapper[5014]: I0228 04:52:33.714341 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/15c6e56b-a312-43c9-b627-af4138518fe4-dispersionconf\") pod \"15c6e56b-a312-43c9-b627-af4138518fe4\" (UID: \"15c6e56b-a312-43c9-b627-af4138518fe4\") " Feb 28 04:52:33 crc kubenswrapper[5014]: I0228 04:52:33.715031 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15c6e56b-a312-43c9-b627-af4138518fe4-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "15c6e56b-a312-43c9-b627-af4138518fe4" (UID: "15c6e56b-a312-43c9-b627-af4138518fe4"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:52:33 crc kubenswrapper[5014]: I0228 04:52:33.716168 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15c6e56b-a312-43c9-b627-af4138518fe4-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "15c6e56b-a312-43c9-b627-af4138518fe4" (UID: "15c6e56b-a312-43c9-b627-af4138518fe4"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:52:33 crc kubenswrapper[5014]: I0228 04:52:33.758569 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15c6e56b-a312-43c9-b627-af4138518fe4-kube-api-access-x2dqt" (OuterVolumeSpecName: "kube-api-access-x2dqt") pod "15c6e56b-a312-43c9-b627-af4138518fe4" (UID: "15c6e56b-a312-43c9-b627-af4138518fe4"). InnerVolumeSpecName "kube-api-access-x2dqt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:52:33 crc kubenswrapper[5014]: I0228 04:52:33.761712 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15c6e56b-a312-43c9-b627-af4138518fe4-scripts" (OuterVolumeSpecName: "scripts") pod "15c6e56b-a312-43c9-b627-af4138518fe4" (UID: "15c6e56b-a312-43c9-b627-af4138518fe4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:52:33 crc kubenswrapper[5014]: I0228 04:52:33.761755 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15c6e56b-a312-43c9-b627-af4138518fe4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15c6e56b-a312-43c9-b627-af4138518fe4" (UID: "15c6e56b-a312-43c9-b627-af4138518fe4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:52:33 crc kubenswrapper[5014]: I0228 04:52:33.761724 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15c6e56b-a312-43c9-b627-af4138518fe4-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "15c6e56b-a312-43c9-b627-af4138518fe4" (UID: "15c6e56b-a312-43c9-b627-af4138518fe4"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:52:33 crc kubenswrapper[5014]: I0228 04:52:33.788413 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15c6e56b-a312-43c9-b627-af4138518fe4-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "15c6e56b-a312-43c9-b627-af4138518fe4" (UID: "15c6e56b-a312-43c9-b627-af4138518fe4"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:52:33 crc kubenswrapper[5014]: I0228 04:52:33.816589 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2dqt\" (UniqueName: \"kubernetes.io/projected/15c6e56b-a312-43c9-b627-af4138518fe4-kube-api-access-x2dqt\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:33 crc kubenswrapper[5014]: I0228 04:52:33.816744 5014 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/15c6e56b-a312-43c9-b627-af4138518fe4-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:33 crc kubenswrapper[5014]: I0228 04:52:33.816801 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15c6e56b-a312-43c9-b627-af4138518fe4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:33 crc kubenswrapper[5014]: I0228 04:52:33.816893 5014 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/15c6e56b-a312-43c9-b627-af4138518fe4-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:33 crc kubenswrapper[5014]: I0228 04:52:33.816955 5014 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/15c6e56b-a312-43c9-b627-af4138518fe4-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:33 crc kubenswrapper[5014]: I0228 04:52:33.817007 5014 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/15c6e56b-a312-43c9-b627-af4138518fe4-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:33 crc kubenswrapper[5014]: I0228 04:52:33.817060 5014 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/15c6e56b-a312-43c9-b627-af4138518fe4-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:34 crc kubenswrapper[5014]: I0228 04:52:34.023458 5014 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-n6vxb"] Feb 28 04:52:34 crc kubenswrapper[5014]: I0228 04:52:34.032996 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-n6vxb"] Feb 28 04:52:34 crc kubenswrapper[5014]: I0228 04:52:34.185658 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="159dc98c-67f0-45f3-bdb2-413c2ee86402" path="/var/lib/kubelet/pods/159dc98c-67f0-45f3-bdb2-413c2ee86402/volumes" Feb 28 04:52:34 crc kubenswrapper[5014]: I0228 04:52:34.315460 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-dn9mn" event={"ID":"15c6e56b-a312-43c9-b627-af4138518fe4","Type":"ContainerDied","Data":"77b56616ed923103e2bf7ebf1bc96046c1d726d6acf77156e064ee2d8b068294"} Feb 28 04:52:34 crc kubenswrapper[5014]: I0228 04:52:34.315506 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77b56616ed923103e2bf7ebf1bc96046c1d726d6acf77156e064ee2d8b068294" Feb 28 04:52:34 crc kubenswrapper[5014]: I0228 04:52:34.315530 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-dn9mn" Feb 28 04:52:35 crc kubenswrapper[5014]: I0228 04:52:35.781317 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-9qps6" podUID="02ab5d98-13ab-483d-b32b-a509bedd8ded" containerName="ovn-controller" probeResult="failure" output=< Feb 28 04:52:35 crc kubenswrapper[5014]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 28 04:52:35 crc kubenswrapper[5014]: > Feb 28 04:52:35 crc kubenswrapper[5014]: I0228 04:52:35.862417 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-6vfgk" Feb 28 04:52:35 crc kubenswrapper[5014]: I0228 04:52:35.874659 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-6vfgk" Feb 28 04:52:36 crc kubenswrapper[5014]: I0228 04:52:36.084144 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-9qps6-config-sq548"] Feb 28 04:52:36 crc kubenswrapper[5014]: E0228 04:52:36.084568 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15c6e56b-a312-43c9-b627-af4138518fe4" containerName="swift-ring-rebalance" Feb 28 04:52:36 crc kubenswrapper[5014]: I0228 04:52:36.084600 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="15c6e56b-a312-43c9-b627-af4138518fe4" containerName="swift-ring-rebalance" Feb 28 04:52:36 crc kubenswrapper[5014]: I0228 04:52:36.084825 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="15c6e56b-a312-43c9-b627-af4138518fe4" containerName="swift-ring-rebalance" Feb 28 04:52:36 crc kubenswrapper[5014]: I0228 04:52:36.089544 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-9qps6-config-sq548" Feb 28 04:52:36 crc kubenswrapper[5014]: I0228 04:52:36.091484 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 28 04:52:36 crc kubenswrapper[5014]: I0228 04:52:36.103492 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9qps6-config-sq548"] Feb 28 04:52:36 crc kubenswrapper[5014]: I0228 04:52:36.257539 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4d965c89-660e-45d1-9bb1-ae5324d9f50a-additional-scripts\") pod \"ovn-controller-9qps6-config-sq548\" (UID: \"4d965c89-660e-45d1-9bb1-ae5324d9f50a\") " pod="openstack/ovn-controller-9qps6-config-sq548" Feb 28 04:52:36 crc kubenswrapper[5014]: I0228 04:52:36.257591 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmbzj\" (UniqueName: \"kubernetes.io/projected/4d965c89-660e-45d1-9bb1-ae5324d9f50a-kube-api-access-bmbzj\") pod \"ovn-controller-9qps6-config-sq548\" (UID: \"4d965c89-660e-45d1-9bb1-ae5324d9f50a\") " pod="openstack/ovn-controller-9qps6-config-sq548" Feb 28 04:52:36 crc kubenswrapper[5014]: I0228 04:52:36.257649 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d965c89-660e-45d1-9bb1-ae5324d9f50a-scripts\") pod \"ovn-controller-9qps6-config-sq548\" (UID: \"4d965c89-660e-45d1-9bb1-ae5324d9f50a\") " pod="openstack/ovn-controller-9qps6-config-sq548" Feb 28 04:52:36 crc kubenswrapper[5014]: I0228 04:52:36.257678 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4d965c89-660e-45d1-9bb1-ae5324d9f50a-var-log-ovn\") pod \"ovn-controller-9qps6-config-sq548\" (UID: 
\"4d965c89-660e-45d1-9bb1-ae5324d9f50a\") " pod="openstack/ovn-controller-9qps6-config-sq548" Feb 28 04:52:36 crc kubenswrapper[5014]: I0228 04:52:36.257703 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4d965c89-660e-45d1-9bb1-ae5324d9f50a-var-run\") pod \"ovn-controller-9qps6-config-sq548\" (UID: \"4d965c89-660e-45d1-9bb1-ae5324d9f50a\") " pod="openstack/ovn-controller-9qps6-config-sq548" Feb 28 04:52:36 crc kubenswrapper[5014]: I0228 04:52:36.257747 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4d965c89-660e-45d1-9bb1-ae5324d9f50a-var-run-ovn\") pod \"ovn-controller-9qps6-config-sq548\" (UID: \"4d965c89-660e-45d1-9bb1-ae5324d9f50a\") " pod="openstack/ovn-controller-9qps6-config-sq548" Feb 28 04:52:36 crc kubenswrapper[5014]: I0228 04:52:36.359638 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4d965c89-660e-45d1-9bb1-ae5324d9f50a-additional-scripts\") pod \"ovn-controller-9qps6-config-sq548\" (UID: \"4d965c89-660e-45d1-9bb1-ae5324d9f50a\") " pod="openstack/ovn-controller-9qps6-config-sq548" Feb 28 04:52:36 crc kubenswrapper[5014]: I0228 04:52:36.359689 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmbzj\" (UniqueName: \"kubernetes.io/projected/4d965c89-660e-45d1-9bb1-ae5324d9f50a-kube-api-access-bmbzj\") pod \"ovn-controller-9qps6-config-sq548\" (UID: \"4d965c89-660e-45d1-9bb1-ae5324d9f50a\") " pod="openstack/ovn-controller-9qps6-config-sq548" Feb 28 04:52:36 crc kubenswrapper[5014]: I0228 04:52:36.359735 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d965c89-660e-45d1-9bb1-ae5324d9f50a-scripts\") pod 
\"ovn-controller-9qps6-config-sq548\" (UID: \"4d965c89-660e-45d1-9bb1-ae5324d9f50a\") " pod="openstack/ovn-controller-9qps6-config-sq548" Feb 28 04:52:36 crc kubenswrapper[5014]: I0228 04:52:36.359771 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4d965c89-660e-45d1-9bb1-ae5324d9f50a-var-log-ovn\") pod \"ovn-controller-9qps6-config-sq548\" (UID: \"4d965c89-660e-45d1-9bb1-ae5324d9f50a\") " pod="openstack/ovn-controller-9qps6-config-sq548" Feb 28 04:52:36 crc kubenswrapper[5014]: I0228 04:52:36.359789 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4d965c89-660e-45d1-9bb1-ae5324d9f50a-var-run\") pod \"ovn-controller-9qps6-config-sq548\" (UID: \"4d965c89-660e-45d1-9bb1-ae5324d9f50a\") " pod="openstack/ovn-controller-9qps6-config-sq548" Feb 28 04:52:36 crc kubenswrapper[5014]: I0228 04:52:36.359901 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4d965c89-660e-45d1-9bb1-ae5324d9f50a-var-run-ovn\") pod \"ovn-controller-9qps6-config-sq548\" (UID: \"4d965c89-660e-45d1-9bb1-ae5324d9f50a\") " pod="openstack/ovn-controller-9qps6-config-sq548" Feb 28 04:52:36 crc kubenswrapper[5014]: I0228 04:52:36.360543 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4d965c89-660e-45d1-9bb1-ae5324d9f50a-var-run-ovn\") pod \"ovn-controller-9qps6-config-sq548\" (UID: \"4d965c89-660e-45d1-9bb1-ae5324d9f50a\") " pod="openstack/ovn-controller-9qps6-config-sq548" Feb 28 04:52:36 crc kubenswrapper[5014]: I0228 04:52:36.361959 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4d965c89-660e-45d1-9bb1-ae5324d9f50a-additional-scripts\") pod \"ovn-controller-9qps6-config-sq548\" (UID: 
\"4d965c89-660e-45d1-9bb1-ae5324d9f50a\") " pod="openstack/ovn-controller-9qps6-config-sq548" Feb 28 04:52:36 crc kubenswrapper[5014]: I0228 04:52:36.362031 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4d965c89-660e-45d1-9bb1-ae5324d9f50a-var-log-ovn\") pod \"ovn-controller-9qps6-config-sq548\" (UID: \"4d965c89-660e-45d1-9bb1-ae5324d9f50a\") " pod="openstack/ovn-controller-9qps6-config-sq548" Feb 28 04:52:36 crc kubenswrapper[5014]: I0228 04:52:36.363010 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4d965c89-660e-45d1-9bb1-ae5324d9f50a-var-run\") pod \"ovn-controller-9qps6-config-sq548\" (UID: \"4d965c89-660e-45d1-9bb1-ae5324d9f50a\") " pod="openstack/ovn-controller-9qps6-config-sq548" Feb 28 04:52:36 crc kubenswrapper[5014]: I0228 04:52:36.363789 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d965c89-660e-45d1-9bb1-ae5324d9f50a-scripts\") pod \"ovn-controller-9qps6-config-sq548\" (UID: \"4d965c89-660e-45d1-9bb1-ae5324d9f50a\") " pod="openstack/ovn-controller-9qps6-config-sq548" Feb 28 04:52:36 crc kubenswrapper[5014]: I0228 04:52:36.379525 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmbzj\" (UniqueName: \"kubernetes.io/projected/4d965c89-660e-45d1-9bb1-ae5324d9f50a-kube-api-access-bmbzj\") pod \"ovn-controller-9qps6-config-sq548\" (UID: \"4d965c89-660e-45d1-9bb1-ae5324d9f50a\") " pod="openstack/ovn-controller-9qps6-config-sq548" Feb 28 04:52:36 crc kubenswrapper[5014]: I0228 04:52:36.409457 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-9qps6-config-sq548" Feb 28 04:52:36 crc kubenswrapper[5014]: I0228 04:52:36.875798 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9qps6-config-sq548"] Feb 28 04:52:37 crc kubenswrapper[5014]: I0228 04:52:37.345849 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9qps6-config-sq548" event={"ID":"4d965c89-660e-45d1-9bb1-ae5324d9f50a","Type":"ContainerStarted","Data":"26a6cb788829d03c940e48557c3c66439f547e7508a02ace91020b1052c56647"} Feb 28 04:52:37 crc kubenswrapper[5014]: I0228 04:52:37.346137 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9qps6-config-sq548" event={"ID":"4d965c89-660e-45d1-9bb1-ae5324d9f50a","Type":"ContainerStarted","Data":"f50c1b328a1aba2693d4ec0522801bf619d174241d40e1b39644872426deceb2"} Feb 28 04:52:37 crc kubenswrapper[5014]: I0228 04:52:37.365731 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-9qps6-config-sq548" podStartSLOduration=1.365714145 podStartE2EDuration="1.365714145s" podCreationTimestamp="2026-02-28 04:52:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:52:37.361553902 +0000 UTC m=+1146.031679812" watchObservedRunningTime="2026-02-28 04:52:37.365714145 +0000 UTC m=+1146.035840055" Feb 28 04:52:38 crc kubenswrapper[5014]: I0228 04:52:38.356256 5014 generic.go:334] "Generic (PLEG): container finished" podID="4d965c89-660e-45d1-9bb1-ae5324d9f50a" containerID="26a6cb788829d03c940e48557c3c66439f547e7508a02ace91020b1052c56647" exitCode=0 Feb 28 04:52:38 crc kubenswrapper[5014]: I0228 04:52:38.356448 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9qps6-config-sq548" 
event={"ID":"4d965c89-660e-45d1-9bb1-ae5324d9f50a","Type":"ContainerDied","Data":"26a6cb788829d03c940e48557c3c66439f547e7508a02ace91020b1052c56647"} Feb 28 04:52:39 crc kubenswrapper[5014]: I0228 04:52:39.040433 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-np8j9"] Feb 28 04:52:39 crc kubenswrapper[5014]: I0228 04:52:39.041989 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-np8j9" Feb 28 04:52:39 crc kubenswrapper[5014]: I0228 04:52:39.045731 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 28 04:52:39 crc kubenswrapper[5014]: I0228 04:52:39.065128 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-np8j9"] Feb 28 04:52:39 crc kubenswrapper[5014]: I0228 04:52:39.221725 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b06b7983-c1b9-433b-b3f7-31b07fe8df22-operator-scripts\") pod \"root-account-create-update-np8j9\" (UID: \"b06b7983-c1b9-433b-b3f7-31b07fe8df22\") " pod="openstack/root-account-create-update-np8j9" Feb 28 04:52:39 crc kubenswrapper[5014]: I0228 04:52:39.221775 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwv2q\" (UniqueName: \"kubernetes.io/projected/b06b7983-c1b9-433b-b3f7-31b07fe8df22-kube-api-access-rwv2q\") pod \"root-account-create-update-np8j9\" (UID: \"b06b7983-c1b9-433b-b3f7-31b07fe8df22\") " pod="openstack/root-account-create-update-np8j9" Feb 28 04:52:39 crc kubenswrapper[5014]: I0228 04:52:39.323311 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b06b7983-c1b9-433b-b3f7-31b07fe8df22-operator-scripts\") pod 
\"root-account-create-update-np8j9\" (UID: \"b06b7983-c1b9-433b-b3f7-31b07fe8df22\") " pod="openstack/root-account-create-update-np8j9" Feb 28 04:52:39 crc kubenswrapper[5014]: I0228 04:52:39.323372 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwv2q\" (UniqueName: \"kubernetes.io/projected/b06b7983-c1b9-433b-b3f7-31b07fe8df22-kube-api-access-rwv2q\") pod \"root-account-create-update-np8j9\" (UID: \"b06b7983-c1b9-433b-b3f7-31b07fe8df22\") " pod="openstack/root-account-create-update-np8j9" Feb 28 04:52:39 crc kubenswrapper[5014]: I0228 04:52:39.324769 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b06b7983-c1b9-433b-b3f7-31b07fe8df22-operator-scripts\") pod \"root-account-create-update-np8j9\" (UID: \"b06b7983-c1b9-433b-b3f7-31b07fe8df22\") " pod="openstack/root-account-create-update-np8j9" Feb 28 04:52:39 crc kubenswrapper[5014]: I0228 04:52:39.365610 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwv2q\" (UniqueName: \"kubernetes.io/projected/b06b7983-c1b9-433b-b3f7-31b07fe8df22-kube-api-access-rwv2q\") pod \"root-account-create-update-np8j9\" (UID: \"b06b7983-c1b9-433b-b3f7-31b07fe8df22\") " pod="openstack/root-account-create-update-np8j9" Feb 28 04:52:39 crc kubenswrapper[5014]: I0228 04:52:39.665874 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-np8j9" Feb 28 04:52:40 crc kubenswrapper[5014]: I0228 04:52:40.771649 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-9qps6" Feb 28 04:52:44 crc kubenswrapper[5014]: I0228 04:52:44.002761 5014 scope.go:117] "RemoveContainer" containerID="39850295164bea58efb5f7091f3e17f94456f36d9c454a970df3a4e240bc0c36" Feb 28 04:52:46 crc kubenswrapper[5014]: I0228 04:52:46.064328 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2998e28e-fceb-4daa-a26c-74bffeba0d8f-etc-swift\") pod \"swift-storage-0\" (UID: \"2998e28e-fceb-4daa-a26c-74bffeba0d8f\") " pod="openstack/swift-storage-0" Feb 28 04:52:46 crc kubenswrapper[5014]: I0228 04:52:46.070662 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/2998e28e-fceb-4daa-a26c-74bffeba0d8f-etc-swift\") pod \"swift-storage-0\" (UID: \"2998e28e-fceb-4daa-a26c-74bffeba0d8f\") " pod="openstack/swift-storage-0" Feb 28 04:52:46 crc kubenswrapper[5014]: I0228 04:52:46.176142 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 28 04:52:46 crc kubenswrapper[5014]: I0228 04:52:46.423675 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-9qps6-config-sq548" Feb 28 04:52:46 crc kubenswrapper[5014]: I0228 04:52:46.424357 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9qps6-config-sq548" event={"ID":"4d965c89-660e-45d1-9bb1-ae5324d9f50a","Type":"ContainerDied","Data":"f50c1b328a1aba2693d4ec0522801bf619d174241d40e1b39644872426deceb2"} Feb 28 04:52:46 crc kubenswrapper[5014]: I0228 04:52:46.424392 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f50c1b328a1aba2693d4ec0522801bf619d174241d40e1b39644872426deceb2" Feb 28 04:52:46 crc kubenswrapper[5014]: I0228 04:52:46.571881 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4d965c89-660e-45d1-9bb1-ae5324d9f50a-var-run\") pod \"4d965c89-660e-45d1-9bb1-ae5324d9f50a\" (UID: \"4d965c89-660e-45d1-9bb1-ae5324d9f50a\") " Feb 28 04:52:46 crc kubenswrapper[5014]: I0228 04:52:46.571987 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d965c89-660e-45d1-9bb1-ae5324d9f50a-scripts\") pod \"4d965c89-660e-45d1-9bb1-ae5324d9f50a\" (UID: \"4d965c89-660e-45d1-9bb1-ae5324d9f50a\") " Feb 28 04:52:46 crc kubenswrapper[5014]: I0228 04:52:46.572017 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4d965c89-660e-45d1-9bb1-ae5324d9f50a-additional-scripts\") pod \"4d965c89-660e-45d1-9bb1-ae5324d9f50a\" (UID: \"4d965c89-660e-45d1-9bb1-ae5324d9f50a\") " Feb 28 04:52:46 crc kubenswrapper[5014]: I0228 04:52:46.572037 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4d965c89-660e-45d1-9bb1-ae5324d9f50a-var-run-ovn\") pod \"4d965c89-660e-45d1-9bb1-ae5324d9f50a\" (UID: \"4d965c89-660e-45d1-9bb1-ae5324d9f50a\") " 
Feb 28 04:52:46 crc kubenswrapper[5014]: I0228 04:52:46.572009 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d965c89-660e-45d1-9bb1-ae5324d9f50a-var-run" (OuterVolumeSpecName: "var-run") pod "4d965c89-660e-45d1-9bb1-ae5324d9f50a" (UID: "4d965c89-660e-45d1-9bb1-ae5324d9f50a"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:52:46 crc kubenswrapper[5014]: I0228 04:52:46.572090 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bmbzj\" (UniqueName: \"kubernetes.io/projected/4d965c89-660e-45d1-9bb1-ae5324d9f50a-kube-api-access-bmbzj\") pod \"4d965c89-660e-45d1-9bb1-ae5324d9f50a\" (UID: \"4d965c89-660e-45d1-9bb1-ae5324d9f50a\") " Feb 28 04:52:46 crc kubenswrapper[5014]: I0228 04:52:46.572110 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4d965c89-660e-45d1-9bb1-ae5324d9f50a-var-log-ovn\") pod \"4d965c89-660e-45d1-9bb1-ae5324d9f50a\" (UID: \"4d965c89-660e-45d1-9bb1-ae5324d9f50a\") " Feb 28 04:52:46 crc kubenswrapper[5014]: I0228 04:52:46.572166 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d965c89-660e-45d1-9bb1-ae5324d9f50a-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "4d965c89-660e-45d1-9bb1-ae5324d9f50a" (UID: "4d965c89-660e-45d1-9bb1-ae5324d9f50a"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:52:46 crc kubenswrapper[5014]: I0228 04:52:46.572500 5014 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4d965c89-660e-45d1-9bb1-ae5324d9f50a-var-run\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:46 crc kubenswrapper[5014]: I0228 04:52:46.572544 5014 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4d965c89-660e-45d1-9bb1-ae5324d9f50a-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:46 crc kubenswrapper[5014]: I0228 04:52:46.572575 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d965c89-660e-45d1-9bb1-ae5324d9f50a-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "4d965c89-660e-45d1-9bb1-ae5324d9f50a" (UID: "4d965c89-660e-45d1-9bb1-ae5324d9f50a"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:52:46 crc kubenswrapper[5014]: I0228 04:52:46.572711 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d965c89-660e-45d1-9bb1-ae5324d9f50a-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "4d965c89-660e-45d1-9bb1-ae5324d9f50a" (UID: "4d965c89-660e-45d1-9bb1-ae5324d9f50a"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:52:46 crc kubenswrapper[5014]: I0228 04:52:46.573011 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d965c89-660e-45d1-9bb1-ae5324d9f50a-scripts" (OuterVolumeSpecName: "scripts") pod "4d965c89-660e-45d1-9bb1-ae5324d9f50a" (UID: "4d965c89-660e-45d1-9bb1-ae5324d9f50a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:52:46 crc kubenswrapper[5014]: I0228 04:52:46.576542 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d965c89-660e-45d1-9bb1-ae5324d9f50a-kube-api-access-bmbzj" (OuterVolumeSpecName: "kube-api-access-bmbzj") pod "4d965c89-660e-45d1-9bb1-ae5324d9f50a" (UID: "4d965c89-660e-45d1-9bb1-ae5324d9f50a"). InnerVolumeSpecName "kube-api-access-bmbzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:52:46 crc kubenswrapper[5014]: I0228 04:52:46.673899 5014 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4d965c89-660e-45d1-9bb1-ae5324d9f50a-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:46 crc kubenswrapper[5014]: I0228 04:52:46.674155 5014 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4d965c89-660e-45d1-9bb1-ae5324d9f50a-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:46 crc kubenswrapper[5014]: I0228 04:52:46.674171 5014 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4d965c89-660e-45d1-9bb1-ae5324d9f50a-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:46 crc kubenswrapper[5014]: I0228 04:52:46.674185 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bmbzj\" (UniqueName: \"kubernetes.io/projected/4d965c89-660e-45d1-9bb1-ae5324d9f50a-kube-api-access-bmbzj\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:47 crc kubenswrapper[5014]: I0228 04:52:47.221020 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-np8j9"] Feb 28 04:52:47 crc kubenswrapper[5014]: W0228 04:52:47.221676 5014 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb06b7983_c1b9_433b_b3f7_31b07fe8df22.slice/crio-c5cf14520e07408f2dfab25588f475a8cfca49fe6dc054e98da8fa21aca5d9f7 WatchSource:0}: Error finding container c5cf14520e07408f2dfab25588f475a8cfca49fe6dc054e98da8fa21aca5d9f7: Status 404 returned error can't find the container with id c5cf14520e07408f2dfab25588f475a8cfca49fe6dc054e98da8fa21aca5d9f7 Feb 28 04:52:47 crc kubenswrapper[5014]: I0228 04:52:47.283492 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 28 04:52:47 crc kubenswrapper[5014]: W0228 04:52:47.291141 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2998e28e_fceb_4daa_a26c_74bffeba0d8f.slice/crio-df1ceec2838c2d77981267830d42ca2f8c060d2d0025e84057245870d85834da WatchSource:0}: Error finding container df1ceec2838c2d77981267830d42ca2f8c060d2d0025e84057245870d85834da: Status 404 returned error can't find the container with id df1ceec2838c2d77981267830d42ca2f8c060d2d0025e84057245870d85834da Feb 28 04:52:47 crc kubenswrapper[5014]: I0228 04:52:47.434478 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"22702874-a9ba-4491-aed2-5ef93384150c","Type":"ContainerStarted","Data":"f961526d87795e98c27c50e183060ac381286cd2bea2f2d85ee1249f982842b0"} Feb 28 04:52:47 crc kubenswrapper[5014]: I0228 04:52:47.435401 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 28 04:52:47 crc kubenswrapper[5014]: I0228 04:52:47.436596 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tz5jx" event={"ID":"d232d598-8b65-47f6-a5dc-9d77d37d9b80","Type":"ContainerStarted","Data":"d0a82c59ea00be18e303205194b256bdc9ef9541536c4aa13de12fb8aadfcf04"} Feb 28 04:52:47 crc kubenswrapper[5014]: I0228 04:52:47.438697 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"2998e28e-fceb-4daa-a26c-74bffeba0d8f","Type":"ContainerStarted","Data":"df1ceec2838c2d77981267830d42ca2f8c060d2d0025e84057245870d85834da"} Feb 28 04:52:47 crc kubenswrapper[5014]: I0228 04:52:47.440274 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9qps6-config-sq548" Feb 28 04:52:47 crc kubenswrapper[5014]: I0228 04:52:47.440284 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-np8j9" event={"ID":"b06b7983-c1b9-433b-b3f7-31b07fe8df22","Type":"ContainerStarted","Data":"c5cf14520e07408f2dfab25588f475a8cfca49fe6dc054e98da8fa21aca5d9f7"} Feb 28 04:52:47 crc kubenswrapper[5014]: I0228 04:52:47.486452 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.76406533 podStartE2EDuration="41.486429402s" podCreationTimestamp="2026-02-28 04:52:06 +0000 UTC" firstStartedPulling="2026-02-28 04:52:06.944946589 +0000 UTC m=+1115.615072539" lastFinishedPulling="2026-02-28 04:52:46.667310701 +0000 UTC m=+1155.337436611" observedRunningTime="2026-02-28 04:52:47.463110023 +0000 UTC m=+1156.133235953" watchObservedRunningTime="2026-02-28 04:52:47.486429402 +0000 UTC m=+1156.156555322" Feb 28 04:52:47 crc kubenswrapper[5014]: I0228 04:52:47.493579 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-tz5jx" podStartSLOduration=1.8726514380000001 podStartE2EDuration="16.493562314s" podCreationTimestamp="2026-02-28 04:52:31 +0000 UTC" firstStartedPulling="2026-02-28 04:52:32.061097311 +0000 UTC m=+1140.731223221" lastFinishedPulling="2026-02-28 04:52:46.682008187 +0000 UTC m=+1155.352134097" observedRunningTime="2026-02-28 04:52:47.479960157 +0000 UTC m=+1156.150086067" watchObservedRunningTime="2026-02-28 04:52:47.493562314 +0000 UTC m=+1156.163688234" Feb 28 04:52:47 crc kubenswrapper[5014]: E0228 04:52:47.518952 
5014 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4d965c89_660e_45d1_9bb1_ae5324d9f50a.slice/crio-f50c1b328a1aba2693d4ec0522801bf619d174241d40e1b39644872426deceb2\": RecentStats: unable to find data in memory cache]" Feb 28 04:52:47 crc kubenswrapper[5014]: I0228 04:52:47.572371 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-9qps6-config-sq548"] Feb 28 04:52:47 crc kubenswrapper[5014]: I0228 04:52:47.586638 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-9qps6-config-sq548"] Feb 28 04:52:47 crc kubenswrapper[5014]: I0228 04:52:47.632615 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="46e75f06-c8df-44b8-a6e4-8f663e8b0a1a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Feb 28 04:52:47 crc kubenswrapper[5014]: I0228 04:52:47.950289 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.188376 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d965c89-660e-45d1-9bb1-ae5324d9f50a" path="/var/lib/kubelet/pods/4d965c89-660e-45d1-9bb1-ae5324d9f50a/volumes" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.321064 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-xhkk8"] Feb 28 04:52:48 crc kubenswrapper[5014]: E0228 04:52:48.321396 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d965c89-660e-45d1-9bb1-ae5324d9f50a" containerName="ovn-config" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.321408 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d965c89-660e-45d1-9bb1-ae5324d9f50a" containerName="ovn-config" Feb 28 04:52:48 crc 
kubenswrapper[5014]: I0228 04:52:48.321554 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d965c89-660e-45d1-9bb1-ae5324d9f50a" containerName="ovn-config" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.322049 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xhkk8" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.369219 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-xhkk8"] Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.425043 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kks5r\" (UniqueName: \"kubernetes.io/projected/6ddbcd84-162e-477c-8005-b8abee09ff21-kube-api-access-kks5r\") pod \"cinder-db-create-xhkk8\" (UID: \"6ddbcd84-162e-477c-8005-b8abee09ff21\") " pod="openstack/cinder-db-create-xhkk8" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.425125 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ddbcd84-162e-477c-8005-b8abee09ff21-operator-scripts\") pod \"cinder-db-create-xhkk8\" (UID: \"6ddbcd84-162e-477c-8005-b8abee09ff21\") " pod="openstack/cinder-db-create-xhkk8" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.438753 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-00cb-account-create-update-pz9ks"] Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.440638 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-00cb-account-create-update-pz9ks" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.442259 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.461291 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-np8j9" event={"ID":"b06b7983-c1b9-433b-b3f7-31b07fe8df22","Type":"ContainerStarted","Data":"1994779af0e7875777cd96c2afda74a553a72cba74da39df8d39eb135fe7d067"} Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.461334 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-00cb-account-create-update-pz9ks"] Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.516116 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-np8j9" podStartSLOduration=9.516094481 podStartE2EDuration="9.516094481s" podCreationTimestamp="2026-02-28 04:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:52:48.487331956 +0000 UTC m=+1157.157457866" watchObservedRunningTime="2026-02-28 04:52:48.516094481 +0000 UTC m=+1157.186220391" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.516433 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-59z6v"] Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.517585 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-59z6v" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.526530 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ddbcd84-162e-477c-8005-b8abee09ff21-operator-scripts\") pod \"cinder-db-create-xhkk8\" (UID: \"6ddbcd84-162e-477c-8005-b8abee09ff21\") " pod="openstack/cinder-db-create-xhkk8" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.526689 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kks5r\" (UniqueName: \"kubernetes.io/projected/6ddbcd84-162e-477c-8005-b8abee09ff21-kube-api-access-kks5r\") pod \"cinder-db-create-xhkk8\" (UID: \"6ddbcd84-162e-477c-8005-b8abee09ff21\") " pod="openstack/cinder-db-create-xhkk8" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.527277 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ddbcd84-162e-477c-8005-b8abee09ff21-operator-scripts\") pod \"cinder-db-create-xhkk8\" (UID: \"6ddbcd84-162e-477c-8005-b8abee09ff21\") " pod="openstack/cinder-db-create-xhkk8" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.536724 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-59z6v"] Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.543516 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-85af-account-create-update-r6zrh"] Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.544585 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-85af-account-create-update-r6zrh" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.548307 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kks5r\" (UniqueName: \"kubernetes.io/projected/6ddbcd84-162e-477c-8005-b8abee09ff21-kube-api-access-kks5r\") pod \"cinder-db-create-xhkk8\" (UID: \"6ddbcd84-162e-477c-8005-b8abee09ff21\") " pod="openstack/cinder-db-create-xhkk8" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.548771 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.555545 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-85af-account-create-update-r6zrh"] Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.628195 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmkm7\" (UniqueName: \"kubernetes.io/projected/5fa3a83e-dbd3-4274-9267-d70f5d6d0c16-kube-api-access-zmkm7\") pod \"barbican-db-create-59z6v\" (UID: \"5fa3a83e-dbd3-4274-9267-d70f5d6d0c16\") " pod="openstack/barbican-db-create-59z6v" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.628867 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b282ed90-fa26-48d8-bb49-98036e930eb4-operator-scripts\") pod \"barbican-00cb-account-create-update-pz9ks\" (UID: \"b282ed90-fa26-48d8-bb49-98036e930eb4\") " pod="openstack/barbican-00cb-account-create-update-pz9ks" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.628947 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fa3a83e-dbd3-4274-9267-d70f5d6d0c16-operator-scripts\") pod \"barbican-db-create-59z6v\" (UID: 
\"5fa3a83e-dbd3-4274-9267-d70f5d6d0c16\") " pod="openstack/barbican-db-create-59z6v" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.629053 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg962\" (UniqueName: \"kubernetes.io/projected/b282ed90-fa26-48d8-bb49-98036e930eb4-kube-api-access-fg962\") pod \"barbican-00cb-account-create-update-pz9ks\" (UID: \"b282ed90-fa26-48d8-bb49-98036e930eb4\") " pod="openstack/barbican-00cb-account-create-update-pz9ks" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.637626 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-fz9cq"] Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.638929 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-fz9cq" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.642958 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-dlt7p"] Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.643845 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-dlt7p" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.644069 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.644175 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-xhkk8" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.644316 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.644534 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.644739 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zmpcc" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.667704 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-fz9cq"] Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.673903 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-dlt7p"] Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.730127 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmkm7\" (UniqueName: \"kubernetes.io/projected/5fa3a83e-dbd3-4274-9267-d70f5d6d0c16-kube-api-access-zmkm7\") pod \"barbican-db-create-59z6v\" (UID: \"5fa3a83e-dbd3-4274-9267-d70f5d6d0c16\") " pod="openstack/barbican-db-create-59z6v" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.730202 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq6dk\" (UniqueName: \"kubernetes.io/projected/5910fc74-7b13-4884-a7f8-27156a1e013c-kube-api-access-vq6dk\") pod \"cinder-85af-account-create-update-r6zrh\" (UID: \"5910fc74-7b13-4884-a7f8-27156a1e013c\") " pod="openstack/cinder-85af-account-create-update-r6zrh" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.730231 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b282ed90-fa26-48d8-bb49-98036e930eb4-operator-scripts\") pod 
\"barbican-00cb-account-create-update-pz9ks\" (UID: \"b282ed90-fa26-48d8-bb49-98036e930eb4\") " pod="openstack/barbican-00cb-account-create-update-pz9ks" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.730269 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fa3a83e-dbd3-4274-9267-d70f5d6d0c16-operator-scripts\") pod \"barbican-db-create-59z6v\" (UID: \"5fa3a83e-dbd3-4274-9267-d70f5d6d0c16\") " pod="openstack/barbican-db-create-59z6v" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.730296 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5910fc74-7b13-4884-a7f8-27156a1e013c-operator-scripts\") pod \"cinder-85af-account-create-update-r6zrh\" (UID: \"5910fc74-7b13-4884-a7f8-27156a1e013c\") " pod="openstack/cinder-85af-account-create-update-r6zrh" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.730326 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg962\" (UniqueName: \"kubernetes.io/projected/b282ed90-fa26-48d8-bb49-98036e930eb4-kube-api-access-fg962\") pod \"barbican-00cb-account-create-update-pz9ks\" (UID: \"b282ed90-fa26-48d8-bb49-98036e930eb4\") " pod="openstack/barbican-00cb-account-create-update-pz9ks" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.731462 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b282ed90-fa26-48d8-bb49-98036e930eb4-operator-scripts\") pod \"barbican-00cb-account-create-update-pz9ks\" (UID: \"b282ed90-fa26-48d8-bb49-98036e930eb4\") " pod="openstack/barbican-00cb-account-create-update-pz9ks" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.732245 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/5fa3a83e-dbd3-4274-9267-d70f5d6d0c16-operator-scripts\") pod \"barbican-db-create-59z6v\" (UID: \"5fa3a83e-dbd3-4274-9267-d70f5d6d0c16\") " pod="openstack/barbican-db-create-59z6v" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.790685 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmkm7\" (UniqueName: \"kubernetes.io/projected/5fa3a83e-dbd3-4274-9267-d70f5d6d0c16-kube-api-access-zmkm7\") pod \"barbican-db-create-59z6v\" (UID: \"5fa3a83e-dbd3-4274-9267-d70f5d6d0c16\") " pod="openstack/barbican-db-create-59z6v" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.790737 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg962\" (UniqueName: \"kubernetes.io/projected/b282ed90-fa26-48d8-bb49-98036e930eb4-kube-api-access-fg962\") pod \"barbican-00cb-account-create-update-pz9ks\" (UID: \"b282ed90-fa26-48d8-bb49-98036e930eb4\") " pod="openstack/barbican-00cb-account-create-update-pz9ks" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.833485 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8edf7176-4345-42b7-a018-1574b7fb86b8-operator-scripts\") pod \"neutron-db-create-dlt7p\" (UID: \"8edf7176-4345-42b7-a018-1574b7fb86b8\") " pod="openstack/neutron-db-create-dlt7p" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.833539 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc9xw\" (UniqueName: \"kubernetes.io/projected/94986f9e-b185-4fb3-98c1-6f02fbfc64e5-kube-api-access-xc9xw\") pod \"keystone-db-sync-fz9cq\" (UID: \"94986f9e-b185-4fb3-98c1-6f02fbfc64e5\") " pod="openstack/keystone-db-sync-fz9cq" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.833571 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-59z6v" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.833628 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq6dk\" (UniqueName: \"kubernetes.io/projected/5910fc74-7b13-4884-a7f8-27156a1e013c-kube-api-access-vq6dk\") pod \"cinder-85af-account-create-update-r6zrh\" (UID: \"5910fc74-7b13-4884-a7f8-27156a1e013c\") " pod="openstack/cinder-85af-account-create-update-r6zrh" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.833653 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94986f9e-b185-4fb3-98c1-6f02fbfc64e5-config-data\") pod \"keystone-db-sync-fz9cq\" (UID: \"94986f9e-b185-4fb3-98c1-6f02fbfc64e5\") " pod="openstack/keystone-db-sync-fz9cq" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.833672 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94986f9e-b185-4fb3-98c1-6f02fbfc64e5-combined-ca-bundle\") pod \"keystone-db-sync-fz9cq\" (UID: \"94986f9e-b185-4fb3-98c1-6f02fbfc64e5\") " pod="openstack/keystone-db-sync-fz9cq" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.833709 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdcvf\" (UniqueName: \"kubernetes.io/projected/8edf7176-4345-42b7-a018-1574b7fb86b8-kube-api-access-kdcvf\") pod \"neutron-db-create-dlt7p\" (UID: \"8edf7176-4345-42b7-a018-1574b7fb86b8\") " pod="openstack/neutron-db-create-dlt7p" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.833741 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5910fc74-7b13-4884-a7f8-27156a1e013c-operator-scripts\") pod \"cinder-85af-account-create-update-r6zrh\" 
(UID: \"5910fc74-7b13-4884-a7f8-27156a1e013c\") " pod="openstack/cinder-85af-account-create-update-r6zrh" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.834465 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5910fc74-7b13-4884-a7f8-27156a1e013c-operator-scripts\") pod \"cinder-85af-account-create-update-r6zrh\" (UID: \"5910fc74-7b13-4884-a7f8-27156a1e013c\") " pod="openstack/cinder-85af-account-create-update-r6zrh" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.863516 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq6dk\" (UniqueName: \"kubernetes.io/projected/5910fc74-7b13-4884-a7f8-27156a1e013c-kube-api-access-vq6dk\") pod \"cinder-85af-account-create-update-r6zrh\" (UID: \"5910fc74-7b13-4884-a7f8-27156a1e013c\") " pod="openstack/cinder-85af-account-create-update-r6zrh" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.883583 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-e8bd-account-create-update-wksxn"] Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.884630 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-e8bd-account-create-update-wksxn" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.887154 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.902146 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-85af-account-create-update-r6zrh" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.936344 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8edf7176-4345-42b7-a018-1574b7fb86b8-operator-scripts\") pod \"neutron-db-create-dlt7p\" (UID: \"8edf7176-4345-42b7-a018-1574b7fb86b8\") " pod="openstack/neutron-db-create-dlt7p" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.936869 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xc9xw\" (UniqueName: \"kubernetes.io/projected/94986f9e-b185-4fb3-98c1-6f02fbfc64e5-kube-api-access-xc9xw\") pod \"keystone-db-sync-fz9cq\" (UID: \"94986f9e-b185-4fb3-98c1-6f02fbfc64e5\") " pod="openstack/keystone-db-sync-fz9cq" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.936901 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjspv\" (UniqueName: \"kubernetes.io/projected/6694f8b3-f730-49e4-8fc1-55a39e5acf4d-kube-api-access-mjspv\") pod \"neutron-e8bd-account-create-update-wksxn\" (UID: \"6694f8b3-f730-49e4-8fc1-55a39e5acf4d\") " pod="openstack/neutron-e8bd-account-create-update-wksxn" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.937055 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6694f8b3-f730-49e4-8fc1-55a39e5acf4d-operator-scripts\") pod \"neutron-e8bd-account-create-update-wksxn\" (UID: \"6694f8b3-f730-49e4-8fc1-55a39e5acf4d\") " pod="openstack/neutron-e8bd-account-create-update-wksxn" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.937145 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94986f9e-b185-4fb3-98c1-6f02fbfc64e5-config-data\") pod 
\"keystone-db-sync-fz9cq\" (UID: \"94986f9e-b185-4fb3-98c1-6f02fbfc64e5\") " pod="openstack/keystone-db-sync-fz9cq" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.937168 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94986f9e-b185-4fb3-98c1-6f02fbfc64e5-combined-ca-bundle\") pod \"keystone-db-sync-fz9cq\" (UID: \"94986f9e-b185-4fb3-98c1-6f02fbfc64e5\") " pod="openstack/keystone-db-sync-fz9cq" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.937248 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdcvf\" (UniqueName: \"kubernetes.io/projected/8edf7176-4345-42b7-a018-1574b7fb86b8-kube-api-access-kdcvf\") pod \"neutron-db-create-dlt7p\" (UID: \"8edf7176-4345-42b7-a018-1574b7fb86b8\") " pod="openstack/neutron-db-create-dlt7p" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.938359 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8edf7176-4345-42b7-a018-1574b7fb86b8-operator-scripts\") pod \"neutron-db-create-dlt7p\" (UID: \"8edf7176-4345-42b7-a018-1574b7fb86b8\") " pod="openstack/neutron-db-create-dlt7p" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.947686 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94986f9e-b185-4fb3-98c1-6f02fbfc64e5-combined-ca-bundle\") pod \"keystone-db-sync-fz9cq\" (UID: \"94986f9e-b185-4fb3-98c1-6f02fbfc64e5\") " pod="openstack/keystone-db-sync-fz9cq" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.947707 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94986f9e-b185-4fb3-98c1-6f02fbfc64e5-config-data\") pod \"keystone-db-sync-fz9cq\" (UID: \"94986f9e-b185-4fb3-98c1-6f02fbfc64e5\") " pod="openstack/keystone-db-sync-fz9cq" 
Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.956619 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-e8bd-account-create-update-wksxn"] Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.964846 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdcvf\" (UniqueName: \"kubernetes.io/projected/8edf7176-4345-42b7-a018-1574b7fb86b8-kube-api-access-kdcvf\") pod \"neutron-db-create-dlt7p\" (UID: \"8edf7176-4345-42b7-a018-1574b7fb86b8\") " pod="openstack/neutron-db-create-dlt7p" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.965001 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xc9xw\" (UniqueName: \"kubernetes.io/projected/94986f9e-b185-4fb3-98c1-6f02fbfc64e5-kube-api-access-xc9xw\") pod \"keystone-db-sync-fz9cq\" (UID: \"94986f9e-b185-4fb3-98c1-6f02fbfc64e5\") " pod="openstack/keystone-db-sync-fz9cq" Feb 28 04:52:48 crc kubenswrapper[5014]: I0228 04:52:48.976198 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-fz9cq" Feb 28 04:52:49 crc kubenswrapper[5014]: I0228 04:52:49.040296 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjspv\" (UniqueName: \"kubernetes.io/projected/6694f8b3-f730-49e4-8fc1-55a39e5acf4d-kube-api-access-mjspv\") pod \"neutron-e8bd-account-create-update-wksxn\" (UID: \"6694f8b3-f730-49e4-8fc1-55a39e5acf4d\") " pod="openstack/neutron-e8bd-account-create-update-wksxn" Feb 28 04:52:49 crc kubenswrapper[5014]: I0228 04:52:49.040395 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6694f8b3-f730-49e4-8fc1-55a39e5acf4d-operator-scripts\") pod \"neutron-e8bd-account-create-update-wksxn\" (UID: \"6694f8b3-f730-49e4-8fc1-55a39e5acf4d\") " pod="openstack/neutron-e8bd-account-create-update-wksxn" Feb 28 04:52:49 crc kubenswrapper[5014]: I0228 04:52:49.041434 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6694f8b3-f730-49e4-8fc1-55a39e5acf4d-operator-scripts\") pod \"neutron-e8bd-account-create-update-wksxn\" (UID: \"6694f8b3-f730-49e4-8fc1-55a39e5acf4d\") " pod="openstack/neutron-e8bd-account-create-update-wksxn" Feb 28 04:52:49 crc kubenswrapper[5014]: I0228 04:52:49.060068 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjspv\" (UniqueName: \"kubernetes.io/projected/6694f8b3-f730-49e4-8fc1-55a39e5acf4d-kube-api-access-mjspv\") pod \"neutron-e8bd-account-create-update-wksxn\" (UID: \"6694f8b3-f730-49e4-8fc1-55a39e5acf4d\") " pod="openstack/neutron-e8bd-account-create-update-wksxn" Feb 28 04:52:49 crc kubenswrapper[5014]: I0228 04:52:49.063556 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-00cb-account-create-update-pz9ks" Feb 28 04:52:49 crc kubenswrapper[5014]: I0228 04:52:49.169225 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-dlt7p" Feb 28 04:52:49 crc kubenswrapper[5014]: I0228 04:52:49.235192 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-e8bd-account-create-update-wksxn" Feb 28 04:52:49 crc kubenswrapper[5014]: I0228 04:52:49.321549 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-59z6v"] Feb 28 04:52:49 crc kubenswrapper[5014]: W0228 04:52:49.332938 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5fa3a83e_dbd3_4274_9267_d70f5d6d0c16.slice/crio-71e0b9e06470b78d8dd9d7c48afe1a52a9e952cb2ed764acd38fd39b2b848879 WatchSource:0}: Error finding container 71e0b9e06470b78d8dd9d7c48afe1a52a9e952cb2ed764acd38fd39b2b848879: Status 404 returned error can't find the container with id 71e0b9e06470b78d8dd9d7c48afe1a52a9e952cb2ed764acd38fd39b2b848879 Feb 28 04:52:49 crc kubenswrapper[5014]: I0228 04:52:49.340845 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-xhkk8"] Feb 28 04:52:49 crc kubenswrapper[5014]: I0228 04:52:49.469437 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-59z6v" event={"ID":"5fa3a83e-dbd3-4274-9267-d70f5d6d0c16","Type":"ContainerStarted","Data":"71e0b9e06470b78d8dd9d7c48afe1a52a9e952cb2ed764acd38fd39b2b848879"} Feb 28 04:52:49 crc kubenswrapper[5014]: I0228 04:52:49.470142 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xhkk8" event={"ID":"6ddbcd84-162e-477c-8005-b8abee09ff21","Type":"ContainerStarted","Data":"4a96462270166f4977a8e2dcbc35f8e00501d81a2e65e37f8fae44ac0362523b"} Feb 28 04:52:49 crc kubenswrapper[5014]: I0228 04:52:49.471626 
5014 generic.go:334] "Generic (PLEG): container finished" podID="b06b7983-c1b9-433b-b3f7-31b07fe8df22" containerID="1994779af0e7875777cd96c2afda74a553a72cba74da39df8d39eb135fe7d067" exitCode=0 Feb 28 04:52:49 crc kubenswrapper[5014]: I0228 04:52:49.471656 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-np8j9" event={"ID":"b06b7983-c1b9-433b-b3f7-31b07fe8df22","Type":"ContainerDied","Data":"1994779af0e7875777cd96c2afda74a553a72cba74da39df8d39eb135fe7d067"} Feb 28 04:52:49 crc kubenswrapper[5014]: I0228 04:52:49.584757 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-85af-account-create-update-r6zrh"] Feb 28 04:52:49 crc kubenswrapper[5014]: I0228 04:52:49.678157 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-fz9cq"] Feb 28 04:52:49 crc kubenswrapper[5014]: W0228 04:52:49.696197 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94986f9e_b185_4fb3_98c1_6f02fbfc64e5.slice/crio-75804815ed369ad36279f69bdf3c40d24fdb07769d9ec37a9f5733a07e12e539 WatchSource:0}: Error finding container 75804815ed369ad36279f69bdf3c40d24fdb07769d9ec37a9f5733a07e12e539: Status 404 returned error can't find the container with id 75804815ed369ad36279f69bdf3c40d24fdb07769d9ec37a9f5733a07e12e539 Feb 28 04:52:49 crc kubenswrapper[5014]: I0228 04:52:49.777097 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-00cb-account-create-update-pz9ks"] Feb 28 04:52:49 crc kubenswrapper[5014]: I0228 04:52:49.879849 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-e8bd-account-create-update-wksxn"] Feb 28 04:52:49 crc kubenswrapper[5014]: W0228 04:52:49.889263 5014 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6694f8b3_f730_49e4_8fc1_55a39e5acf4d.slice/crio-f6ff5297da9efd662a2328eeea31aa3ac4586028a4e0ddfc895fa1d2fb63df72 WatchSource:0}: Error finding container f6ff5297da9efd662a2328eeea31aa3ac4586028a4e0ddfc895fa1d2fb63df72: Status 404 returned error can't find the container with id f6ff5297da9efd662a2328eeea31aa3ac4586028a4e0ddfc895fa1d2fb63df72 Feb 28 04:52:49 crc kubenswrapper[5014]: I0228 04:52:49.914565 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-dlt7p"] Feb 28 04:52:50 crc kubenswrapper[5014]: I0228 04:52:50.624988 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-fz9cq" event={"ID":"94986f9e-b185-4fb3-98c1-6f02fbfc64e5","Type":"ContainerStarted","Data":"75804815ed369ad36279f69bdf3c40d24fdb07769d9ec37a9f5733a07e12e539"} Feb 28 04:52:50 crc kubenswrapper[5014]: I0228 04:52:50.626461 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-59z6v" event={"ID":"5fa3a83e-dbd3-4274-9267-d70f5d6d0c16","Type":"ContainerStarted","Data":"ef52befa051782375bee581993518c9f6cc692c3909c8137b005222ad2a69211"} Feb 28 04:52:50 crc kubenswrapper[5014]: I0228 04:52:50.627749 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xhkk8" event={"ID":"6ddbcd84-162e-477c-8005-b8abee09ff21","Type":"ContainerStarted","Data":"a3ba1f9d6c2aecf288a9e66a4321e73241e0b8862cd9f8511b263d2d494bea14"} Feb 28 04:52:50 crc kubenswrapper[5014]: I0228 04:52:50.630772 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e8bd-account-create-update-wksxn" event={"ID":"6694f8b3-f730-49e4-8fc1-55a39e5acf4d","Type":"ContainerStarted","Data":"f6ff5297da9efd662a2328eeea31aa3ac4586028a4e0ddfc895fa1d2fb63df72"} Feb 28 04:52:50 crc kubenswrapper[5014]: I0228 04:52:50.631988 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dlt7p" 
event={"ID":"8edf7176-4345-42b7-a018-1574b7fb86b8","Type":"ContainerStarted","Data":"43f6117873184a52b7869ce08e8633ca702f2f5803099369cb4dc93aebb9cb00"} Feb 28 04:52:50 crc kubenswrapper[5014]: I0228 04:52:50.634326 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-85af-account-create-update-r6zrh" event={"ID":"5910fc74-7b13-4884-a7f8-27156a1e013c","Type":"ContainerStarted","Data":"f0d4a17f0725f933521ea7f8a5dff7c52378d5a2c722bdedbef4ef3f0cb77c82"} Feb 28 04:52:50 crc kubenswrapper[5014]: I0228 04:52:50.634377 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-85af-account-create-update-r6zrh" event={"ID":"5910fc74-7b13-4884-a7f8-27156a1e013c","Type":"ContainerStarted","Data":"d611ed616a2f8da8f792ea430d8f6ed0646f26f0a0643883d1a641e67241caec"} Feb 28 04:52:50 crc kubenswrapper[5014]: I0228 04:52:50.635859 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-00cb-account-create-update-pz9ks" event={"ID":"b282ed90-fa26-48d8-bb49-98036e930eb4","Type":"ContainerStarted","Data":"6db1111e5c9de99a1229fca0f4833c3a55d96903b83992a2002c68471f6854ba"} Feb 28 04:52:50 crc kubenswrapper[5014]: I0228 04:52:50.635891 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-00cb-account-create-update-pz9ks" event={"ID":"b282ed90-fa26-48d8-bb49-98036e930eb4","Type":"ContainerStarted","Data":"a6ba52d620343b3aace4dbb8d13d6e4f7bb113aa90b09802fe304fb3d8894761"} Feb 28 04:52:50 crc kubenswrapper[5014]: I0228 04:52:50.644425 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-59z6v" podStartSLOduration=2.64440656 podStartE2EDuration="2.64440656s" podCreationTimestamp="2026-02-28 04:52:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:52:50.641629215 +0000 UTC m=+1159.311755135" watchObservedRunningTime="2026-02-28 04:52:50.64440656 +0000 
UTC m=+1159.314532470" Feb 28 04:52:50 crc kubenswrapper[5014]: I0228 04:52:50.665770 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-00cb-account-create-update-pz9ks" podStartSLOduration=2.665749295 podStartE2EDuration="2.665749295s" podCreationTimestamp="2026-02-28 04:52:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:52:50.656114586 +0000 UTC m=+1159.326240486" watchObservedRunningTime="2026-02-28 04:52:50.665749295 +0000 UTC m=+1159.335875205" Feb 28 04:52:50 crc kubenswrapper[5014]: I0228 04:52:50.672740 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-xhkk8" podStartSLOduration=2.672721404 podStartE2EDuration="2.672721404s" podCreationTimestamp="2026-02-28 04:52:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:52:50.671870241 +0000 UTC m=+1159.341996151" watchObservedRunningTime="2026-02-28 04:52:50.672721404 +0000 UTC m=+1159.342847314" Feb 28 04:52:50 crc kubenswrapper[5014]: I0228 04:52:50.683906 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-85af-account-create-update-r6zrh" podStartSLOduration=2.683885315 podStartE2EDuration="2.683885315s" podCreationTimestamp="2026-02-28 04:52:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:52:50.68259526 +0000 UTC m=+1159.352721170" watchObservedRunningTime="2026-02-28 04:52:50.683885315 +0000 UTC m=+1159.354011235" Feb 28 04:52:51 crc kubenswrapper[5014]: I0228 04:52:51.235722 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-np8j9" Feb 28 04:52:51 crc kubenswrapper[5014]: I0228 04:52:51.425952 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwv2q\" (UniqueName: \"kubernetes.io/projected/b06b7983-c1b9-433b-b3f7-31b07fe8df22-kube-api-access-rwv2q\") pod \"b06b7983-c1b9-433b-b3f7-31b07fe8df22\" (UID: \"b06b7983-c1b9-433b-b3f7-31b07fe8df22\") " Feb 28 04:52:51 crc kubenswrapper[5014]: I0228 04:52:51.426606 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b06b7983-c1b9-433b-b3f7-31b07fe8df22-operator-scripts\") pod \"b06b7983-c1b9-433b-b3f7-31b07fe8df22\" (UID: \"b06b7983-c1b9-433b-b3f7-31b07fe8df22\") " Feb 28 04:52:51 crc kubenswrapper[5014]: I0228 04:52:51.427548 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b06b7983-c1b9-433b-b3f7-31b07fe8df22-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b06b7983-c1b9-433b-b3f7-31b07fe8df22" (UID: "b06b7983-c1b9-433b-b3f7-31b07fe8df22"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:52:51 crc kubenswrapper[5014]: I0228 04:52:51.430767 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b06b7983-c1b9-433b-b3f7-31b07fe8df22-kube-api-access-rwv2q" (OuterVolumeSpecName: "kube-api-access-rwv2q") pod "b06b7983-c1b9-433b-b3f7-31b07fe8df22" (UID: "b06b7983-c1b9-433b-b3f7-31b07fe8df22"). InnerVolumeSpecName "kube-api-access-rwv2q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:52:51 crc kubenswrapper[5014]: I0228 04:52:51.528718 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwv2q\" (UniqueName: \"kubernetes.io/projected/b06b7983-c1b9-433b-b3f7-31b07fe8df22-kube-api-access-rwv2q\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:51 crc kubenswrapper[5014]: I0228 04:52:51.529338 5014 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b06b7983-c1b9-433b-b3f7-31b07fe8df22-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:51 crc kubenswrapper[5014]: I0228 04:52:51.663341 5014 generic.go:334] "Generic (PLEG): container finished" podID="5fa3a83e-dbd3-4274-9267-d70f5d6d0c16" containerID="ef52befa051782375bee581993518c9f6cc692c3909c8137b005222ad2a69211" exitCode=0 Feb 28 04:52:51 crc kubenswrapper[5014]: I0228 04:52:51.663457 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-59z6v" event={"ID":"5fa3a83e-dbd3-4274-9267-d70f5d6d0c16","Type":"ContainerDied","Data":"ef52befa051782375bee581993518c9f6cc692c3909c8137b005222ad2a69211"} Feb 28 04:52:51 crc kubenswrapper[5014]: I0228 04:52:51.672703 5014 generic.go:334] "Generic (PLEG): container finished" podID="6ddbcd84-162e-477c-8005-b8abee09ff21" containerID="a3ba1f9d6c2aecf288a9e66a4321e73241e0b8862cd9f8511b263d2d494bea14" exitCode=0 Feb 28 04:52:51 crc kubenswrapper[5014]: I0228 04:52:51.672790 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xhkk8" event={"ID":"6ddbcd84-162e-477c-8005-b8abee09ff21","Type":"ContainerDied","Data":"a3ba1f9d6c2aecf288a9e66a4321e73241e0b8862cd9f8511b263d2d494bea14"} Feb 28 04:52:51 crc kubenswrapper[5014]: I0228 04:52:51.686112 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"2998e28e-fceb-4daa-a26c-74bffeba0d8f","Type":"ContainerStarted","Data":"8a31fd6c3d16a46c66b4f915eb5a6ce1206a9083bcb21bba8f8431b5f820276c"} Feb 28 04:52:51 crc kubenswrapper[5014]: I0228 04:52:51.686395 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2998e28e-fceb-4daa-a26c-74bffeba0d8f","Type":"ContainerStarted","Data":"975f1930fa7c63dc2b9287003a54f6500ef9a9d6985a88c81234c4b12036af90"} Feb 28 04:52:51 crc kubenswrapper[5014]: I0228 04:52:51.702290 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e8bd-account-create-update-wksxn" event={"ID":"6694f8b3-f730-49e4-8fc1-55a39e5acf4d","Type":"ContainerStarted","Data":"f44b6d11cc8abe8e734c8da218218685ee455b7f16e07f089dc3532f634cc34f"} Feb 28 04:52:51 crc kubenswrapper[5014]: I0228 04:52:51.721450 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dlt7p" event={"ID":"8edf7176-4345-42b7-a018-1574b7fb86b8","Type":"ContainerStarted","Data":"4360b205468bbbcdfa98a2ff7d2e8c075e824fbd4f9ba9ce04a1685742c487f2"} Feb 28 04:52:51 crc kubenswrapper[5014]: I0228 04:52:51.774009 5014 generic.go:334] "Generic (PLEG): container finished" podID="5910fc74-7b13-4884-a7f8-27156a1e013c" containerID="f0d4a17f0725f933521ea7f8a5dff7c52378d5a2c722bdedbef4ef3f0cb77c82" exitCode=0 Feb 28 04:52:51 crc kubenswrapper[5014]: I0228 04:52:51.774096 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-85af-account-create-update-r6zrh" event={"ID":"5910fc74-7b13-4884-a7f8-27156a1e013c","Type":"ContainerDied","Data":"f0d4a17f0725f933521ea7f8a5dff7c52378d5a2c722bdedbef4ef3f0cb77c82"} Feb 28 04:52:51 crc kubenswrapper[5014]: I0228 04:52:51.784772 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-e8bd-account-create-update-wksxn" podStartSLOduration=3.784754235 podStartE2EDuration="3.784754235s" podCreationTimestamp="2026-02-28 04:52:48 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:52:51.769476112 +0000 UTC m=+1160.439602022" watchObservedRunningTime="2026-02-28 04:52:51.784754235 +0000 UTC m=+1160.454880145" Feb 28 04:52:51 crc kubenswrapper[5014]: I0228 04:52:51.787896 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-np8j9" event={"ID":"b06b7983-c1b9-433b-b3f7-31b07fe8df22","Type":"ContainerDied","Data":"c5cf14520e07408f2dfab25588f475a8cfca49fe6dc054e98da8fa21aca5d9f7"} Feb 28 04:52:51 crc kubenswrapper[5014]: I0228 04:52:51.787929 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5cf14520e07408f2dfab25588f475a8cfca49fe6dc054e98da8fa21aca5d9f7" Feb 28 04:52:51 crc kubenswrapper[5014]: I0228 04:52:51.787979 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-np8j9" Feb 28 04:52:51 crc kubenswrapper[5014]: I0228 04:52:51.790918 5014 generic.go:334] "Generic (PLEG): container finished" podID="b282ed90-fa26-48d8-bb49-98036e930eb4" containerID="6db1111e5c9de99a1229fca0f4833c3a55d96903b83992a2002c68471f6854ba" exitCode=0 Feb 28 04:52:51 crc kubenswrapper[5014]: I0228 04:52:51.790957 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-00cb-account-create-update-pz9ks" event={"ID":"b282ed90-fa26-48d8-bb49-98036e930eb4","Type":"ContainerDied","Data":"6db1111e5c9de99a1229fca0f4833c3a55d96903b83992a2002c68471f6854ba"} Feb 28 04:52:51 crc kubenswrapper[5014]: I0228 04:52:51.804862 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-dlt7p" podStartSLOduration=3.804844386 podStartE2EDuration="3.804844386s" podCreationTimestamp="2026-02-28 04:52:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-28 04:52:51.798581988 +0000 UTC m=+1160.468707898" watchObservedRunningTime="2026-02-28 04:52:51.804844386 +0000 UTC m=+1160.474970296" Feb 28 04:52:52 crc kubenswrapper[5014]: I0228 04:52:52.813509 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2998e28e-fceb-4daa-a26c-74bffeba0d8f","Type":"ContainerStarted","Data":"ed81d740c0546a8c9ca53f781e6978e3fa65f123605b01cfbceaeb18d5a33b9d"} Feb 28 04:52:52 crc kubenswrapper[5014]: I0228 04:52:52.813771 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2998e28e-fceb-4daa-a26c-74bffeba0d8f","Type":"ContainerStarted","Data":"6e9526386c8e303b34b89ca626668bc10168419f184d5431f55d24c5b8bf95ae"} Feb 28 04:52:52 crc kubenswrapper[5014]: I0228 04:52:52.815038 5014 generic.go:334] "Generic (PLEG): container finished" podID="6694f8b3-f730-49e4-8fc1-55a39e5acf4d" containerID="f44b6d11cc8abe8e734c8da218218685ee455b7f16e07f089dc3532f634cc34f" exitCode=0 Feb 28 04:52:52 crc kubenswrapper[5014]: I0228 04:52:52.815080 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e8bd-account-create-update-wksxn" event={"ID":"6694f8b3-f730-49e4-8fc1-55a39e5acf4d","Type":"ContainerDied","Data":"f44b6d11cc8abe8e734c8da218218685ee455b7f16e07f089dc3532f634cc34f"} Feb 28 04:52:52 crc kubenswrapper[5014]: I0228 04:52:52.817184 5014 generic.go:334] "Generic (PLEG): container finished" podID="8edf7176-4345-42b7-a018-1574b7fb86b8" containerID="4360b205468bbbcdfa98a2ff7d2e8c075e824fbd4f9ba9ce04a1685742c487f2" exitCode=0 Feb 28 04:52:52 crc kubenswrapper[5014]: I0228 04:52:52.817272 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dlt7p" event={"ID":"8edf7176-4345-42b7-a018-1574b7fb86b8","Type":"ContainerDied","Data":"4360b205468bbbcdfa98a2ff7d2e8c075e824fbd4f9ba9ce04a1685742c487f2"} Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.535250 5014 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-59z6v" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.536122 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.540881 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-85af-account-create-update-r6zrh" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.546207 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-e8bd-account-create-update-wksxn" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.552664 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xhkk8" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.566415 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-00cb-account-create-update-pz9ks" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.588339 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-dlt7p" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.658871 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5910fc74-7b13-4884-a7f8-27156a1e013c-operator-scripts\") pod \"5910fc74-7b13-4884-a7f8-27156a1e013c\" (UID: \"5910fc74-7b13-4884-a7f8-27156a1e013c\") " Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.659085 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vq6dk\" (UniqueName: \"kubernetes.io/projected/5910fc74-7b13-4884-a7f8-27156a1e013c-kube-api-access-vq6dk\") pod \"5910fc74-7b13-4884-a7f8-27156a1e013c\" (UID: \"5910fc74-7b13-4884-a7f8-27156a1e013c\") " Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.659182 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6694f8b3-f730-49e4-8fc1-55a39e5acf4d-operator-scripts\") pod \"6694f8b3-f730-49e4-8fc1-55a39e5acf4d\" (UID: \"6694f8b3-f730-49e4-8fc1-55a39e5acf4d\") " Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.659235 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmkm7\" (UniqueName: \"kubernetes.io/projected/5fa3a83e-dbd3-4274-9267-d70f5d6d0c16-kube-api-access-zmkm7\") pod \"5fa3a83e-dbd3-4274-9267-d70f5d6d0c16\" (UID: \"5fa3a83e-dbd3-4274-9267-d70f5d6d0c16\") " Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.659268 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ddbcd84-162e-477c-8005-b8abee09ff21-operator-scripts\") pod \"6ddbcd84-162e-477c-8005-b8abee09ff21\" (UID: \"6ddbcd84-162e-477c-8005-b8abee09ff21\") " Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.659340 5014 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-mjspv\" (UniqueName: \"kubernetes.io/projected/6694f8b3-f730-49e4-8fc1-55a39e5acf4d-kube-api-access-mjspv\") pod \"6694f8b3-f730-49e4-8fc1-55a39e5acf4d\" (UID: \"6694f8b3-f730-49e4-8fc1-55a39e5acf4d\") " Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.659376 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5910fc74-7b13-4884-a7f8-27156a1e013c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5910fc74-7b13-4884-a7f8-27156a1e013c" (UID: "5910fc74-7b13-4884-a7f8-27156a1e013c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.659405 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kks5r\" (UniqueName: \"kubernetes.io/projected/6ddbcd84-162e-477c-8005-b8abee09ff21-kube-api-access-kks5r\") pod \"6ddbcd84-162e-477c-8005-b8abee09ff21\" (UID: \"6ddbcd84-162e-477c-8005-b8abee09ff21\") " Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.659675 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fa3a83e-dbd3-4274-9267-d70f5d6d0c16-operator-scripts\") pod \"5fa3a83e-dbd3-4274-9267-d70f5d6d0c16\" (UID: \"5fa3a83e-dbd3-4274-9267-d70f5d6d0c16\") " Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.660004 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6694f8b3-f730-49e4-8fc1-55a39e5acf4d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6694f8b3-f730-49e4-8fc1-55a39e5acf4d" (UID: "6694f8b3-f730-49e4-8fc1-55a39e5acf4d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.660341 5014 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5910fc74-7b13-4884-a7f8-27156a1e013c-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.660359 5014 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6694f8b3-f730-49e4-8fc1-55a39e5acf4d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.663013 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ddbcd84-162e-477c-8005-b8abee09ff21-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6ddbcd84-162e-477c-8005-b8abee09ff21" (UID: "6ddbcd84-162e-477c-8005-b8abee09ff21"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.663351 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fa3a83e-dbd3-4274-9267-d70f5d6d0c16-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5fa3a83e-dbd3-4274-9267-d70f5d6d0c16" (UID: "5fa3a83e-dbd3-4274-9267-d70f5d6d0c16"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.667379 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fa3a83e-dbd3-4274-9267-d70f5d6d0c16-kube-api-access-zmkm7" (OuterVolumeSpecName: "kube-api-access-zmkm7") pod "5fa3a83e-dbd3-4274-9267-d70f5d6d0c16" (UID: "5fa3a83e-dbd3-4274-9267-d70f5d6d0c16"). InnerVolumeSpecName "kube-api-access-zmkm7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.667434 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5910fc74-7b13-4884-a7f8-27156a1e013c-kube-api-access-vq6dk" (OuterVolumeSpecName: "kube-api-access-vq6dk") pod "5910fc74-7b13-4884-a7f8-27156a1e013c" (UID: "5910fc74-7b13-4884-a7f8-27156a1e013c"). InnerVolumeSpecName "kube-api-access-vq6dk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.671788 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ddbcd84-162e-477c-8005-b8abee09ff21-kube-api-access-kks5r" (OuterVolumeSpecName: "kube-api-access-kks5r") pod "6ddbcd84-162e-477c-8005-b8abee09ff21" (UID: "6ddbcd84-162e-477c-8005-b8abee09ff21"). InnerVolumeSpecName "kube-api-access-kks5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.691019 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6694f8b3-f730-49e4-8fc1-55a39e5acf4d-kube-api-access-mjspv" (OuterVolumeSpecName: "kube-api-access-mjspv") pod "6694f8b3-f730-49e4-8fc1-55a39e5acf4d" (UID: "6694f8b3-f730-49e4-8fc1-55a39e5acf4d"). InnerVolumeSpecName "kube-api-access-mjspv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.761823 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b282ed90-fa26-48d8-bb49-98036e930eb4-operator-scripts\") pod \"b282ed90-fa26-48d8-bb49-98036e930eb4\" (UID: \"b282ed90-fa26-48d8-bb49-98036e930eb4\") " Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.762219 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdcvf\" (UniqueName: \"kubernetes.io/projected/8edf7176-4345-42b7-a018-1574b7fb86b8-kube-api-access-kdcvf\") pod \"8edf7176-4345-42b7-a018-1574b7fb86b8\" (UID: \"8edf7176-4345-42b7-a018-1574b7fb86b8\") " Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.762324 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b282ed90-fa26-48d8-bb49-98036e930eb4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b282ed90-fa26-48d8-bb49-98036e930eb4" (UID: "b282ed90-fa26-48d8-bb49-98036e930eb4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.762248 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8edf7176-4345-42b7-a018-1574b7fb86b8-operator-scripts\") pod \"8edf7176-4345-42b7-a018-1574b7fb86b8\" (UID: \"8edf7176-4345-42b7-a018-1574b7fb86b8\") " Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.762850 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8edf7176-4345-42b7-a018-1574b7fb86b8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8edf7176-4345-42b7-a018-1574b7fb86b8" (UID: "8edf7176-4345-42b7-a018-1574b7fb86b8"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.762867 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fg962\" (UniqueName: \"kubernetes.io/projected/b282ed90-fa26-48d8-bb49-98036e930eb4-kube-api-access-fg962\") pod \"b282ed90-fa26-48d8-bb49-98036e930eb4\" (UID: \"b282ed90-fa26-48d8-bb49-98036e930eb4\") " Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.763286 5014 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b282ed90-fa26-48d8-bb49-98036e930eb4-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.763309 5014 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8edf7176-4345-42b7-a018-1574b7fb86b8-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.763321 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vq6dk\" (UniqueName: \"kubernetes.io/projected/5910fc74-7b13-4884-a7f8-27156a1e013c-kube-api-access-vq6dk\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.763336 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmkm7\" (UniqueName: \"kubernetes.io/projected/5fa3a83e-dbd3-4274-9267-d70f5d6d0c16-kube-api-access-zmkm7\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.763348 5014 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6ddbcd84-162e-477c-8005-b8abee09ff21-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.763360 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjspv\" (UniqueName: 
\"kubernetes.io/projected/6694f8b3-f730-49e4-8fc1-55a39e5acf4d-kube-api-access-mjspv\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.763371 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kks5r\" (UniqueName: \"kubernetes.io/projected/6ddbcd84-162e-477c-8005-b8abee09ff21-kube-api-access-kks5r\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.763382 5014 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5fa3a83e-dbd3-4274-9267-d70f5d6d0c16-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.765395 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8edf7176-4345-42b7-a018-1574b7fb86b8-kube-api-access-kdcvf" (OuterVolumeSpecName: "kube-api-access-kdcvf") pod "8edf7176-4345-42b7-a018-1574b7fb86b8" (UID: "8edf7176-4345-42b7-a018-1574b7fb86b8"). InnerVolumeSpecName "kube-api-access-kdcvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.765996 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b282ed90-fa26-48d8-bb49-98036e930eb4-kube-api-access-fg962" (OuterVolumeSpecName: "kube-api-access-fg962") pod "b282ed90-fa26-48d8-bb49-98036e930eb4" (UID: "b282ed90-fa26-48d8-bb49-98036e930eb4"). InnerVolumeSpecName "kube-api-access-fg962". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.865344 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fg962\" (UniqueName: \"kubernetes.io/projected/b282ed90-fa26-48d8-bb49-98036e930eb4-kube-api-access-fg962\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.865386 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdcvf\" (UniqueName: \"kubernetes.io/projected/8edf7176-4345-42b7-a018-1574b7fb86b8-kube-api-access-kdcvf\") on node \"crc\" DevicePath \"\"" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.958028 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-00cb-account-create-update-pz9ks" event={"ID":"b282ed90-fa26-48d8-bb49-98036e930eb4","Type":"ContainerDied","Data":"a6ba52d620343b3aace4dbb8d13d6e4f7bb113aa90b09802fe304fb3d8894761"} Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.958110 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6ba52d620343b3aace4dbb8d13d6e4f7bb113aa90b09802fe304fb3d8894761" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.958044 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-00cb-account-create-update-pz9ks" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.959593 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-59z6v" event={"ID":"5fa3a83e-dbd3-4274-9267-d70f5d6d0c16","Type":"ContainerDied","Data":"71e0b9e06470b78d8dd9d7c48afe1a52a9e952cb2ed764acd38fd39b2b848879"} Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.959627 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71e0b9e06470b78d8dd9d7c48afe1a52a9e952cb2ed764acd38fd39b2b848879" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.959989 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-59z6v" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.961586 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-xhkk8" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.961591 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-xhkk8" event={"ID":"6ddbcd84-162e-477c-8005-b8abee09ff21","Type":"ContainerDied","Data":"4a96462270166f4977a8e2dcbc35f8e00501d81a2e65e37f8fae44ac0362523b"} Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.961627 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a96462270166f4977a8e2dcbc35f8e00501d81a2e65e37f8fae44ac0362523b" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.964407 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e8bd-account-create-update-wksxn" event={"ID":"6694f8b3-f730-49e4-8fc1-55a39e5acf4d","Type":"ContainerDied","Data":"f6ff5297da9efd662a2328eeea31aa3ac4586028a4e0ddfc895fa1d2fb63df72"} Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.964441 5014 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="f6ff5297da9efd662a2328eeea31aa3ac4586028a4e0ddfc895fa1d2fb63df72" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.964424 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-e8bd-account-create-update-wksxn" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.967017 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dlt7p" event={"ID":"8edf7176-4345-42b7-a018-1574b7fb86b8","Type":"ContainerDied","Data":"43f6117873184a52b7869ce08e8633ca702f2f5803099369cb4dc93aebb9cb00"} Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.967064 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43f6117873184a52b7869ce08e8633ca702f2f5803099369cb4dc93aebb9cb00" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.967119 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-dlt7p" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.973972 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-85af-account-create-update-r6zrh" event={"ID":"5910fc74-7b13-4884-a7f8-27156a1e013c","Type":"ContainerDied","Data":"d611ed616a2f8da8f792ea430d8f6ed0646f26f0a0643883d1a641e67241caec"} Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.974013 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d611ed616a2f8da8f792ea430d8f6ed0646f26f0a0643883d1a641e67241caec" Feb 28 04:52:56 crc kubenswrapper[5014]: I0228 04:52:56.974079 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-85af-account-create-update-r6zrh" Feb 28 04:52:58 crc kubenswrapper[5014]: I0228 04:52:58.633668 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:53:03 crc kubenswrapper[5014]: I0228 04:53:03.377532 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-fz9cq" event={"ID":"94986f9e-b185-4fb3-98c1-6f02fbfc64e5","Type":"ContainerStarted","Data":"ceead62a11cec3d18ea3e806ba189b0f76d1ffeb85fbd2edeeb5f9ac23c786e5"} Feb 28 04:53:03 crc kubenswrapper[5014]: I0228 04:53:03.396460 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-fz9cq" podStartSLOduration=2.28730576 podStartE2EDuration="15.396440562s" podCreationTimestamp="2026-02-28 04:52:48 +0000 UTC" firstStartedPulling="2026-02-28 04:52:49.699777895 +0000 UTC m=+1158.369903805" lastFinishedPulling="2026-02-28 04:53:02.808912697 +0000 UTC m=+1171.479038607" observedRunningTime="2026-02-28 04:53:03.395087395 +0000 UTC m=+1172.065213305" watchObservedRunningTime="2026-02-28 04:53:03.396440562 +0000 UTC m=+1172.066566472" Feb 28 04:53:04 crc kubenswrapper[5014]: I0228 04:53:04.389270 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2998e28e-fceb-4daa-a26c-74bffeba0d8f","Type":"ContainerStarted","Data":"713c91e078047c6c6e347992da5a714bf95a5ce4c39311c8bd118b50453c8c73"} Feb 28 04:53:05 crc kubenswrapper[5014]: I0228 04:53:05.406315 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2998e28e-fceb-4daa-a26c-74bffeba0d8f","Type":"ContainerStarted","Data":"616dbf666bc32e4c8f245a9836ceb2823f50a602ce63305c923bc61ce6f47006"} Feb 28 04:53:05 crc kubenswrapper[5014]: I0228 04:53:05.406680 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"2998e28e-fceb-4daa-a26c-74bffeba0d8f","Type":"ContainerStarted","Data":"6522645c04e60741fb11bbad76439fe97b901f6440c2f2b7d5a7d53a143a0888"} Feb 28 04:53:05 crc kubenswrapper[5014]: I0228 04:53:05.406694 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2998e28e-fceb-4daa-a26c-74bffeba0d8f","Type":"ContainerStarted","Data":"4f58d10f351656a0bef1ed701b94ffa59ffbfcc0b7e78a983a043ccbae07d6ba"} Feb 28 04:53:07 crc kubenswrapper[5014]: I0228 04:53:07.428310 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2998e28e-fceb-4daa-a26c-74bffeba0d8f","Type":"ContainerStarted","Data":"7f9bcf7bcaf18b78552efdd234114ba26ae5b9fd3614e320406ad1b264dba8d1"} Feb 28 04:53:07 crc kubenswrapper[5014]: I0228 04:53:07.429177 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2998e28e-fceb-4daa-a26c-74bffeba0d8f","Type":"ContainerStarted","Data":"9dbc8621d0b27d714c68469a9f2dc85363d2a6d00ceb9605c0ef007a612cd6ae"} Feb 28 04:53:08 crc kubenswrapper[5014]: I0228 04:53:08.450301 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2998e28e-fceb-4daa-a26c-74bffeba0d8f","Type":"ContainerStarted","Data":"8301a2ade5ed1de62215ce7f41dd03f2db701b0ca82d10518ab8d3785f9fd33f"} Feb 28 04:53:08 crc kubenswrapper[5014]: I0228 04:53:08.450715 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2998e28e-fceb-4daa-a26c-74bffeba0d8f","Type":"ContainerStarted","Data":"95a95d59d8ffd262bb6a3008e666c7c2a02fb7932e4a543b0dd8f38d2af7138c"} Feb 28 04:53:08 crc kubenswrapper[5014]: I0228 04:53:08.450736 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2998e28e-fceb-4daa-a26c-74bffeba0d8f","Type":"ContainerStarted","Data":"18e90d81657ef04bd7394438d4bbaf0d07dd6fe19cfe27219106dc23f6d99429"} Feb 28 04:53:08 crc 
kubenswrapper[5014]: I0228 04:53:08.450753 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2998e28e-fceb-4daa-a26c-74bffeba0d8f","Type":"ContainerStarted","Data":"b976a1e9d359eec2563122f183d58841916e3558a2efd738bb1d73c9c8ec99f2"} Feb 28 04:53:08 crc kubenswrapper[5014]: I0228 04:53:08.450774 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"2998e28e-fceb-4daa-a26c-74bffeba0d8f","Type":"ContainerStarted","Data":"7655f4c94572bd260fb5af0b2cd90fda7426c0b4fb95090edf99f413429ab0d2"} Feb 28 04:53:08 crc kubenswrapper[5014]: I0228 04:53:08.457048 5014 generic.go:334] "Generic (PLEG): container finished" podID="94986f9e-b185-4fb3-98c1-6f02fbfc64e5" containerID="ceead62a11cec3d18ea3e806ba189b0f76d1ffeb85fbd2edeeb5f9ac23c786e5" exitCode=0 Feb 28 04:53:08 crc kubenswrapper[5014]: I0228 04:53:08.457152 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-fz9cq" event={"ID":"94986f9e-b185-4fb3-98c1-6f02fbfc64e5","Type":"ContainerDied","Data":"ceead62a11cec3d18ea3e806ba189b0f76d1ffeb85fbd2edeeb5f9ac23c786e5"} Feb 28 04:53:08 crc kubenswrapper[5014]: I0228 04:53:08.516130 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=35.859916233999996 podStartE2EDuration="55.516107495s" podCreationTimestamp="2026-02-28 04:52:13 +0000 UTC" firstStartedPulling="2026-02-28 04:52:47.293704525 +0000 UTC m=+1155.963830435" lastFinishedPulling="2026-02-28 04:53:06.949895796 +0000 UTC m=+1175.620021696" observedRunningTime="2026-02-28 04:53:08.508173952 +0000 UTC m=+1177.178299862" watchObservedRunningTime="2026-02-28 04:53:08.516107495 +0000 UTC m=+1177.186233425" Feb 28 04:53:08 crc kubenswrapper[5014]: I0228 04:53:08.789732 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-vqwf8"] Feb 28 04:53:08 crc kubenswrapper[5014]: E0228 04:53:08.790279 5014 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6694f8b3-f730-49e4-8fc1-55a39e5acf4d" containerName="mariadb-account-create-update" Feb 28 04:53:08 crc kubenswrapper[5014]: I0228 04:53:08.790354 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="6694f8b3-f730-49e4-8fc1-55a39e5acf4d" containerName="mariadb-account-create-update" Feb 28 04:53:08 crc kubenswrapper[5014]: E0228 04:53:08.790435 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b06b7983-c1b9-433b-b3f7-31b07fe8df22" containerName="mariadb-account-create-update" Feb 28 04:53:08 crc kubenswrapper[5014]: I0228 04:53:08.790512 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="b06b7983-c1b9-433b-b3f7-31b07fe8df22" containerName="mariadb-account-create-update" Feb 28 04:53:08 crc kubenswrapper[5014]: E0228 04:53:08.790583 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ddbcd84-162e-477c-8005-b8abee09ff21" containerName="mariadb-database-create" Feb 28 04:53:08 crc kubenswrapper[5014]: I0228 04:53:08.790641 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ddbcd84-162e-477c-8005-b8abee09ff21" containerName="mariadb-database-create" Feb 28 04:53:08 crc kubenswrapper[5014]: E0228 04:53:08.790780 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8edf7176-4345-42b7-a018-1574b7fb86b8" containerName="mariadb-database-create" Feb 28 04:53:08 crc kubenswrapper[5014]: I0228 04:53:08.790863 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="8edf7176-4345-42b7-a018-1574b7fb86b8" containerName="mariadb-database-create" Feb 28 04:53:08 crc kubenswrapper[5014]: E0228 04:53:08.790926 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b282ed90-fa26-48d8-bb49-98036e930eb4" containerName="mariadb-account-create-update" Feb 28 04:53:08 crc kubenswrapper[5014]: I0228 04:53:08.790984 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="b282ed90-fa26-48d8-bb49-98036e930eb4" 
containerName="mariadb-account-create-update" Feb 28 04:53:08 crc kubenswrapper[5014]: E0228 04:53:08.791045 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fa3a83e-dbd3-4274-9267-d70f5d6d0c16" containerName="mariadb-database-create" Feb 28 04:53:08 crc kubenswrapper[5014]: I0228 04:53:08.791121 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fa3a83e-dbd3-4274-9267-d70f5d6d0c16" containerName="mariadb-database-create" Feb 28 04:53:08 crc kubenswrapper[5014]: E0228 04:53:08.791197 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5910fc74-7b13-4884-a7f8-27156a1e013c" containerName="mariadb-account-create-update" Feb 28 04:53:08 crc kubenswrapper[5014]: I0228 04:53:08.791258 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="5910fc74-7b13-4884-a7f8-27156a1e013c" containerName="mariadb-account-create-update" Feb 28 04:53:08 crc kubenswrapper[5014]: I0228 04:53:08.791452 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ddbcd84-162e-477c-8005-b8abee09ff21" containerName="mariadb-database-create" Feb 28 04:53:08 crc kubenswrapper[5014]: I0228 04:53:08.791525 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="8edf7176-4345-42b7-a018-1574b7fb86b8" containerName="mariadb-database-create" Feb 28 04:53:08 crc kubenswrapper[5014]: I0228 04:53:08.791586 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="b282ed90-fa26-48d8-bb49-98036e930eb4" containerName="mariadb-account-create-update" Feb 28 04:53:08 crc kubenswrapper[5014]: I0228 04:53:08.791652 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fa3a83e-dbd3-4274-9267-d70f5d6d0c16" containerName="mariadb-database-create" Feb 28 04:53:08 crc kubenswrapper[5014]: I0228 04:53:08.791719 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="b06b7983-c1b9-433b-b3f7-31b07fe8df22" containerName="mariadb-account-create-update" Feb 28 04:53:08 crc kubenswrapper[5014]: I0228 
04:53:08.791783 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="6694f8b3-f730-49e4-8fc1-55a39e5acf4d" containerName="mariadb-account-create-update" Feb 28 04:53:08 crc kubenswrapper[5014]: I0228 04:53:08.791869 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="5910fc74-7b13-4884-a7f8-27156a1e013c" containerName="mariadb-account-create-update" Feb 28 04:53:08 crc kubenswrapper[5014]: I0228 04:53:08.792662 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-vqwf8" Feb 28 04:53:08 crc kubenswrapper[5014]: I0228 04:53:08.794672 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 28 04:53:08 crc kubenswrapper[5014]: I0228 04:53:08.803851 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-vqwf8"] Feb 28 04:53:08 crc kubenswrapper[5014]: I0228 04:53:08.918886 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/080ff429-38a5-459f-b650-9090593c1da1-dns-svc\") pod \"dnsmasq-dns-764c5664d7-vqwf8\" (UID: \"080ff429-38a5-459f-b650-9090593c1da1\") " pod="openstack/dnsmasq-dns-764c5664d7-vqwf8" Feb 28 04:53:08 crc kubenswrapper[5014]: I0228 04:53:08.918943 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfc5n\" (UniqueName: \"kubernetes.io/projected/080ff429-38a5-459f-b650-9090593c1da1-kube-api-access-lfc5n\") pod \"dnsmasq-dns-764c5664d7-vqwf8\" (UID: \"080ff429-38a5-459f-b650-9090593c1da1\") " pod="openstack/dnsmasq-dns-764c5664d7-vqwf8" Feb 28 04:53:08 crc kubenswrapper[5014]: I0228 04:53:08.918991 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/080ff429-38a5-459f-b650-9090593c1da1-config\") pod 
\"dnsmasq-dns-764c5664d7-vqwf8\" (UID: \"080ff429-38a5-459f-b650-9090593c1da1\") " pod="openstack/dnsmasq-dns-764c5664d7-vqwf8" Feb 28 04:53:08 crc kubenswrapper[5014]: I0228 04:53:08.919021 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/080ff429-38a5-459f-b650-9090593c1da1-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-vqwf8\" (UID: \"080ff429-38a5-459f-b650-9090593c1da1\") " pod="openstack/dnsmasq-dns-764c5664d7-vqwf8" Feb 28 04:53:08 crc kubenswrapper[5014]: I0228 04:53:08.919119 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/080ff429-38a5-459f-b650-9090593c1da1-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-vqwf8\" (UID: \"080ff429-38a5-459f-b650-9090593c1da1\") " pod="openstack/dnsmasq-dns-764c5664d7-vqwf8" Feb 28 04:53:08 crc kubenswrapper[5014]: I0228 04:53:08.919144 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/080ff429-38a5-459f-b650-9090593c1da1-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-vqwf8\" (UID: \"080ff429-38a5-459f-b650-9090593c1da1\") " pod="openstack/dnsmasq-dns-764c5664d7-vqwf8" Feb 28 04:53:09 crc kubenswrapper[5014]: I0228 04:53:09.023844 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/080ff429-38a5-459f-b650-9090593c1da1-config\") pod \"dnsmasq-dns-764c5664d7-vqwf8\" (UID: \"080ff429-38a5-459f-b650-9090593c1da1\") " pod="openstack/dnsmasq-dns-764c5664d7-vqwf8" Feb 28 04:53:09 crc kubenswrapper[5014]: I0228 04:53:09.023902 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/080ff429-38a5-459f-b650-9090593c1da1-ovsdbserver-nb\") pod 
\"dnsmasq-dns-764c5664d7-vqwf8\" (UID: \"080ff429-38a5-459f-b650-9090593c1da1\") " pod="openstack/dnsmasq-dns-764c5664d7-vqwf8" Feb 28 04:53:09 crc kubenswrapper[5014]: I0228 04:53:09.023958 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/080ff429-38a5-459f-b650-9090593c1da1-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-vqwf8\" (UID: \"080ff429-38a5-459f-b650-9090593c1da1\") " pod="openstack/dnsmasq-dns-764c5664d7-vqwf8" Feb 28 04:53:09 crc kubenswrapper[5014]: I0228 04:53:09.023982 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/080ff429-38a5-459f-b650-9090593c1da1-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-vqwf8\" (UID: \"080ff429-38a5-459f-b650-9090593c1da1\") " pod="openstack/dnsmasq-dns-764c5664d7-vqwf8" Feb 28 04:53:09 crc kubenswrapper[5014]: I0228 04:53:09.024022 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/080ff429-38a5-459f-b650-9090593c1da1-dns-svc\") pod \"dnsmasq-dns-764c5664d7-vqwf8\" (UID: \"080ff429-38a5-459f-b650-9090593c1da1\") " pod="openstack/dnsmasq-dns-764c5664d7-vqwf8" Feb 28 04:53:09 crc kubenswrapper[5014]: I0228 04:53:09.024051 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfc5n\" (UniqueName: \"kubernetes.io/projected/080ff429-38a5-459f-b650-9090593c1da1-kube-api-access-lfc5n\") pod \"dnsmasq-dns-764c5664d7-vqwf8\" (UID: \"080ff429-38a5-459f-b650-9090593c1da1\") " pod="openstack/dnsmasq-dns-764c5664d7-vqwf8" Feb 28 04:53:09 crc kubenswrapper[5014]: I0228 04:53:09.024985 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/080ff429-38a5-459f-b650-9090593c1da1-config\") pod \"dnsmasq-dns-764c5664d7-vqwf8\" (UID: 
\"080ff429-38a5-459f-b650-9090593c1da1\") " pod="openstack/dnsmasq-dns-764c5664d7-vqwf8" Feb 28 04:53:09 crc kubenswrapper[5014]: I0228 04:53:09.024982 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/080ff429-38a5-459f-b650-9090593c1da1-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-vqwf8\" (UID: \"080ff429-38a5-459f-b650-9090593c1da1\") " pod="openstack/dnsmasq-dns-764c5664d7-vqwf8" Feb 28 04:53:09 crc kubenswrapper[5014]: I0228 04:53:09.025061 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/080ff429-38a5-459f-b650-9090593c1da1-dns-svc\") pod \"dnsmasq-dns-764c5664d7-vqwf8\" (UID: \"080ff429-38a5-459f-b650-9090593c1da1\") " pod="openstack/dnsmasq-dns-764c5664d7-vqwf8" Feb 28 04:53:09 crc kubenswrapper[5014]: I0228 04:53:09.025300 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/080ff429-38a5-459f-b650-9090593c1da1-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-vqwf8\" (UID: \"080ff429-38a5-459f-b650-9090593c1da1\") " pod="openstack/dnsmasq-dns-764c5664d7-vqwf8" Feb 28 04:53:09 crc kubenswrapper[5014]: I0228 04:53:09.025577 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/080ff429-38a5-459f-b650-9090593c1da1-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-vqwf8\" (UID: \"080ff429-38a5-459f-b650-9090593c1da1\") " pod="openstack/dnsmasq-dns-764c5664d7-vqwf8" Feb 28 04:53:09 crc kubenswrapper[5014]: I0228 04:53:09.047303 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfc5n\" (UniqueName: \"kubernetes.io/projected/080ff429-38a5-459f-b650-9090593c1da1-kube-api-access-lfc5n\") pod \"dnsmasq-dns-764c5664d7-vqwf8\" (UID: \"080ff429-38a5-459f-b650-9090593c1da1\") " 
pod="openstack/dnsmasq-dns-764c5664d7-vqwf8" Feb 28 04:53:09 crc kubenswrapper[5014]: I0228 04:53:09.110408 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-vqwf8" Feb 28 04:53:09 crc kubenswrapper[5014]: I0228 04:53:09.563625 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-vqwf8"] Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:09.717930 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-fz9cq" Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:09.837171 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94986f9e-b185-4fb3-98c1-6f02fbfc64e5-config-data\") pod \"94986f9e-b185-4fb3-98c1-6f02fbfc64e5\" (UID: \"94986f9e-b185-4fb3-98c1-6f02fbfc64e5\") " Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:09.837314 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94986f9e-b185-4fb3-98c1-6f02fbfc64e5-combined-ca-bundle\") pod \"94986f9e-b185-4fb3-98c1-6f02fbfc64e5\" (UID: \"94986f9e-b185-4fb3-98c1-6f02fbfc64e5\") " Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:09.837405 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xc9xw\" (UniqueName: \"kubernetes.io/projected/94986f9e-b185-4fb3-98c1-6f02fbfc64e5-kube-api-access-xc9xw\") pod \"94986f9e-b185-4fb3-98c1-6f02fbfc64e5\" (UID: \"94986f9e-b185-4fb3-98c1-6f02fbfc64e5\") " Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:09.843502 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94986f9e-b185-4fb3-98c1-6f02fbfc64e5-kube-api-access-xc9xw" (OuterVolumeSpecName: "kube-api-access-xc9xw") pod "94986f9e-b185-4fb3-98c1-6f02fbfc64e5" (UID: 
"94986f9e-b185-4fb3-98c1-6f02fbfc64e5"). InnerVolumeSpecName "kube-api-access-xc9xw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:09.876497 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94986f9e-b185-4fb3-98c1-6f02fbfc64e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94986f9e-b185-4fb3-98c1-6f02fbfc64e5" (UID: "94986f9e-b185-4fb3-98c1-6f02fbfc64e5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:09.904101 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94986f9e-b185-4fb3-98c1-6f02fbfc64e5-config-data" (OuterVolumeSpecName: "config-data") pod "94986f9e-b185-4fb3-98c1-6f02fbfc64e5" (UID: "94986f9e-b185-4fb3-98c1-6f02fbfc64e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:09.940412 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xc9xw\" (UniqueName: \"kubernetes.io/projected/94986f9e-b185-4fb3-98c1-6f02fbfc64e5-kube-api-access-xc9xw\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:09.940456 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94986f9e-b185-4fb3-98c1-6f02fbfc64e5-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:09.940466 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94986f9e-b185-4fb3-98c1-6f02fbfc64e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:10.475086 5014 generic.go:334] "Generic (PLEG): container finished" podID="d232d598-8b65-47f6-a5dc-9d77d37d9b80" 
containerID="d0a82c59ea00be18e303205194b256bdc9ef9541536c4aa13de12fb8aadfcf04" exitCode=0 Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:10.475154 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tz5jx" event={"ID":"d232d598-8b65-47f6-a5dc-9d77d37d9b80","Type":"ContainerDied","Data":"d0a82c59ea00be18e303205194b256bdc9ef9541536c4aa13de12fb8aadfcf04"} Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:10.477011 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-fz9cq" event={"ID":"94986f9e-b185-4fb3-98c1-6f02fbfc64e5","Type":"ContainerDied","Data":"75804815ed369ad36279f69bdf3c40d24fdb07769d9ec37a9f5733a07e12e539"} Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:10.477040 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75804815ed369ad36279f69bdf3c40d24fdb07769d9ec37a9f5733a07e12e539" Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:10.477084 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-fz9cq" Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:10.481417 5014 generic.go:334] "Generic (PLEG): container finished" podID="080ff429-38a5-459f-b650-9090593c1da1" containerID="8788d56d128881efde211114b2e42f10824c84a67c38a8f3a37237ee67632811" exitCode=0 Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:10.481448 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-vqwf8" event={"ID":"080ff429-38a5-459f-b650-9090593c1da1","Type":"ContainerDied","Data":"8788d56d128881efde211114b2e42f10824c84a67c38a8f3a37237ee67632811"} Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:10.481466 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-vqwf8" event={"ID":"080ff429-38a5-459f-b650-9090593c1da1","Type":"ContainerStarted","Data":"34ab4169f2a25cae64c0619b4e80b04e6a60cdb8c3bd392488a88e1c0ddd6fa0"} Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:10.778905 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-vqwf8"] Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:10.798885 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-mzbcc"] Feb 28 04:53:10 crc kubenswrapper[5014]: E0228 04:53:10.799274 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94986f9e-b185-4fb3-98c1-6f02fbfc64e5" containerName="keystone-db-sync" Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:10.799292 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="94986f9e-b185-4fb3-98c1-6f02fbfc64e5" containerName="keystone-db-sync" Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:10.799465 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="94986f9e-b185-4fb3-98c1-6f02fbfc64e5" containerName="keystone-db-sync" Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:10.799969 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mzbcc" Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:10.802651 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:10.802932 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:10.803104 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:10.803223 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zmpcc" Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:10.803319 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:10.808444 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mzbcc"] Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:10.827575 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-8bxvr"] Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:10.830127 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-8bxvr" Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:10.940166 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-8bxvr"] Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:10.982982 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsznk\" (UniqueName: \"kubernetes.io/projected/ce75899e-c98a-4483-940a-f5b67166ced5-kube-api-access-tsznk\") pod \"dnsmasq-dns-5959f8865f-8bxvr\" (UID: \"ce75899e-c98a-4483-940a-f5b67166ced5\") " pod="openstack/dnsmasq-dns-5959f8865f-8bxvr" Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:10.983133 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce75899e-c98a-4483-940a-f5b67166ced5-config\") pod \"dnsmasq-dns-5959f8865f-8bxvr\" (UID: \"ce75899e-c98a-4483-940a-f5b67166ced5\") " pod="openstack/dnsmasq-dns-5959f8865f-8bxvr" Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:10.986554 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce75899e-c98a-4483-940a-f5b67166ced5-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-8bxvr\" (UID: \"ce75899e-c98a-4483-940a-f5b67166ced5\") " pod="openstack/dnsmasq-dns-5959f8865f-8bxvr" Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:10.986624 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hgdh\" (UniqueName: \"kubernetes.io/projected/c00825d9-a4c0-40d9-b77c-e1661747f42d-kube-api-access-7hgdh\") pod \"keystone-bootstrap-mzbcc\" (UID: \"c00825d9-a4c0-40d9-b77c-e1661747f42d\") " pod="openstack/keystone-bootstrap-mzbcc" Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:10.986732 5014 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c00825d9-a4c0-40d9-b77c-e1661747f42d-credential-keys\") pod \"keystone-bootstrap-mzbcc\" (UID: \"c00825d9-a4c0-40d9-b77c-e1661747f42d\") " pod="openstack/keystone-bootstrap-mzbcc" Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:10.986850 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce75899e-c98a-4483-940a-f5b67166ced5-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-8bxvr\" (UID: \"ce75899e-c98a-4483-940a-f5b67166ced5\") " pod="openstack/dnsmasq-dns-5959f8865f-8bxvr" Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:10.986889 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c00825d9-a4c0-40d9-b77c-e1661747f42d-scripts\") pod \"keystone-bootstrap-mzbcc\" (UID: \"c00825d9-a4c0-40d9-b77c-e1661747f42d\") " pod="openstack/keystone-bootstrap-mzbcc" Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:10.986971 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c00825d9-a4c0-40d9-b77c-e1661747f42d-config-data\") pod \"keystone-bootstrap-mzbcc\" (UID: \"c00825d9-a4c0-40d9-b77c-e1661747f42d\") " pod="openstack/keystone-bootstrap-mzbcc" Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:10.987005 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c00825d9-a4c0-40d9-b77c-e1661747f42d-combined-ca-bundle\") pod \"keystone-bootstrap-mzbcc\" (UID: \"c00825d9-a4c0-40d9-b77c-e1661747f42d\") " pod="openstack/keystone-bootstrap-mzbcc" Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:10.987055 5014 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce75899e-c98a-4483-940a-f5b67166ced5-dns-svc\") pod \"dnsmasq-dns-5959f8865f-8bxvr\" (UID: \"ce75899e-c98a-4483-940a-f5b67166ced5\") " pod="openstack/dnsmasq-dns-5959f8865f-8bxvr" Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:10.987103 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce75899e-c98a-4483-940a-f5b67166ced5-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-8bxvr\" (UID: \"ce75899e-c98a-4483-940a-f5b67166ced5\") " pod="openstack/dnsmasq-dns-5959f8865f-8bxvr" Feb 28 04:53:10 crc kubenswrapper[5014]: I0228 04:53:10.987131 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c00825d9-a4c0-40d9-b77c-e1661747f42d-fernet-keys\") pod \"keystone-bootstrap-mzbcc\" (UID: \"c00825d9-a4c0-40d9-b77c-e1661747f42d\") " pod="openstack/keystone-bootstrap-mzbcc" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.047876 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6d9d97cc85-t5jdm"] Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.049264 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6d9d97cc85-t5jdm" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.054596 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.055123 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.063551 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-pqrbm" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.063789 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.081733 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6d9d97cc85-t5jdm"] Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.095963 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c00825d9-a4c0-40d9-b77c-e1661747f42d-credential-keys\") pod \"keystone-bootstrap-mzbcc\" (UID: \"c00825d9-a4c0-40d9-b77c-e1661747f42d\") " pod="openstack/keystone-bootstrap-mzbcc" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.096080 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce75899e-c98a-4483-940a-f5b67166ced5-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-8bxvr\" (UID: \"ce75899e-c98a-4483-940a-f5b67166ced5\") " pod="openstack/dnsmasq-dns-5959f8865f-8bxvr" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.096109 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c00825d9-a4c0-40d9-b77c-e1661747f42d-scripts\") pod \"keystone-bootstrap-mzbcc\" (UID: \"c00825d9-a4c0-40d9-b77c-e1661747f42d\") " 
pod="openstack/keystone-bootstrap-mzbcc" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.096164 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c00825d9-a4c0-40d9-b77c-e1661747f42d-config-data\") pod \"keystone-bootstrap-mzbcc\" (UID: \"c00825d9-a4c0-40d9-b77c-e1661747f42d\") " pod="openstack/keystone-bootstrap-mzbcc" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.096187 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c00825d9-a4c0-40d9-b77c-e1661747f42d-combined-ca-bundle\") pod \"keystone-bootstrap-mzbcc\" (UID: \"c00825d9-a4c0-40d9-b77c-e1661747f42d\") " pod="openstack/keystone-bootstrap-mzbcc" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.096235 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce75899e-c98a-4483-940a-f5b67166ced5-dns-svc\") pod \"dnsmasq-dns-5959f8865f-8bxvr\" (UID: \"ce75899e-c98a-4483-940a-f5b67166ced5\") " pod="openstack/dnsmasq-dns-5959f8865f-8bxvr" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.096265 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce75899e-c98a-4483-940a-f5b67166ced5-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-8bxvr\" (UID: \"ce75899e-c98a-4483-940a-f5b67166ced5\") " pod="openstack/dnsmasq-dns-5959f8865f-8bxvr" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.096285 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c00825d9-a4c0-40d9-b77c-e1661747f42d-fernet-keys\") pod \"keystone-bootstrap-mzbcc\" (UID: \"c00825d9-a4c0-40d9-b77c-e1661747f42d\") " pod="openstack/keystone-bootstrap-mzbcc" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 
04:53:11.096356 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsznk\" (UniqueName: \"kubernetes.io/projected/ce75899e-c98a-4483-940a-f5b67166ced5-kube-api-access-tsznk\") pod \"dnsmasq-dns-5959f8865f-8bxvr\" (UID: \"ce75899e-c98a-4483-940a-f5b67166ced5\") " pod="openstack/dnsmasq-dns-5959f8865f-8bxvr" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.096421 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce75899e-c98a-4483-940a-f5b67166ced5-config\") pod \"dnsmasq-dns-5959f8865f-8bxvr\" (UID: \"ce75899e-c98a-4483-940a-f5b67166ced5\") " pod="openstack/dnsmasq-dns-5959f8865f-8bxvr" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.096504 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce75899e-c98a-4483-940a-f5b67166ced5-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-8bxvr\" (UID: \"ce75899e-c98a-4483-940a-f5b67166ced5\") " pod="openstack/dnsmasq-dns-5959f8865f-8bxvr" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.096530 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hgdh\" (UniqueName: \"kubernetes.io/projected/c00825d9-a4c0-40d9-b77c-e1661747f42d-kube-api-access-7hgdh\") pod \"keystone-bootstrap-mzbcc\" (UID: \"c00825d9-a4c0-40d9-b77c-e1661747f42d\") " pod="openstack/keystone-bootstrap-mzbcc" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.099481 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce75899e-c98a-4483-940a-f5b67166ced5-dns-svc\") pod \"dnsmasq-dns-5959f8865f-8bxvr\" (UID: \"ce75899e-c98a-4483-940a-f5b67166ced5\") " pod="openstack/dnsmasq-dns-5959f8865f-8bxvr" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.100026 5014 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce75899e-c98a-4483-940a-f5b67166ced5-ovsdbserver-sb\") pod \"dnsmasq-dns-5959f8865f-8bxvr\" (UID: \"ce75899e-c98a-4483-940a-f5b67166ced5\") " pod="openstack/dnsmasq-dns-5959f8865f-8bxvr" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.105030 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce75899e-c98a-4483-940a-f5b67166ced5-dns-swift-storage-0\") pod \"dnsmasq-dns-5959f8865f-8bxvr\" (UID: \"ce75899e-c98a-4483-940a-f5b67166ced5\") " pod="openstack/dnsmasq-dns-5959f8865f-8bxvr" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.106651 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce75899e-c98a-4483-940a-f5b67166ced5-config\") pod \"dnsmasq-dns-5959f8865f-8bxvr\" (UID: \"ce75899e-c98a-4483-940a-f5b67166ced5\") " pod="openstack/dnsmasq-dns-5959f8865f-8bxvr" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.108196 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.108908 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce75899e-c98a-4483-940a-f5b67166ced5-ovsdbserver-nb\") pod \"dnsmasq-dns-5959f8865f-8bxvr\" (UID: \"ce75899e-c98a-4483-940a-f5b67166ced5\") " pod="openstack/dnsmasq-dns-5959f8865f-8bxvr" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.109266 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c00825d9-a4c0-40d9-b77c-e1661747f42d-credential-keys\") pod \"keystone-bootstrap-mzbcc\" (UID: \"c00825d9-a4c0-40d9-b77c-e1661747f42d\") " pod="openstack/keystone-bootstrap-mzbcc" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.110530 5014 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.113376 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c00825d9-a4c0-40d9-b77c-e1661747f42d-combined-ca-bundle\") pod \"keystone-bootstrap-mzbcc\" (UID: \"c00825d9-a4c0-40d9-b77c-e1661747f42d\") " pod="openstack/keystone-bootstrap-mzbcc" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.120212 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.121562 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c00825d9-a4c0-40d9-b77c-e1661747f42d-fernet-keys\") pod \"keystone-bootstrap-mzbcc\" (UID: \"c00825d9-a4c0-40d9-b77c-e1661747f42d\") " pod="openstack/keystone-bootstrap-mzbcc" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.127167 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c00825d9-a4c0-40d9-b77c-e1661747f42d-scripts\") pod \"keystone-bootstrap-mzbcc\" (UID: \"c00825d9-a4c0-40d9-b77c-e1661747f42d\") " pod="openstack/keystone-bootstrap-mzbcc" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.136043 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c00825d9-a4c0-40d9-b77c-e1661747f42d-config-data\") pod \"keystone-bootstrap-mzbcc\" (UID: \"c00825d9-a4c0-40d9-b77c-e1661747f42d\") " pod="openstack/keystone-bootstrap-mzbcc" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.138940 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-c9b9j"] Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.140349 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-c9b9j" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.157793 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.158216 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.158444 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.158644 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-ck89z" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.158692 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.162393 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-c9b9j"] Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.168443 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hgdh\" (UniqueName: \"kubernetes.io/projected/c00825d9-a4c0-40d9-b77c-e1661747f42d-kube-api-access-7hgdh\") pod \"keystone-bootstrap-mzbcc\" (UID: \"c00825d9-a4c0-40d9-b77c-e1661747f42d\") " pod="openstack/keystone-bootstrap-mzbcc" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.201938 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/666d76d2-bd8a-4533-86ef-d87c77ed4912-horizon-secret-key\") pod \"horizon-6d9d97cc85-t5jdm\" (UID: \"666d76d2-bd8a-4533-86ef-d87c77ed4912\") " pod="openstack/horizon-6d9d97cc85-t5jdm" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.202022 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/666d76d2-bd8a-4533-86ef-d87c77ed4912-config-data\") pod \"horizon-6d9d97cc85-t5jdm\" (UID: \"666d76d2-bd8a-4533-86ef-d87c77ed4912\") " pod="openstack/horizon-6d9d97cc85-t5jdm" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.202056 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/666d76d2-bd8a-4533-86ef-d87c77ed4912-scripts\") pod \"horizon-6d9d97cc85-t5jdm\" (UID: \"666d76d2-bd8a-4533-86ef-d87c77ed4912\") " pod="openstack/horizon-6d9d97cc85-t5jdm" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.202079 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk9r7\" (UniqueName: \"kubernetes.io/projected/666d76d2-bd8a-4533-86ef-d87c77ed4912-kube-api-access-mk9r7\") pod \"horizon-6d9d97cc85-t5jdm\" (UID: \"666d76d2-bd8a-4533-86ef-d87c77ed4912\") " pod="openstack/horizon-6d9d97cc85-t5jdm" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.202135 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/666d76d2-bd8a-4533-86ef-d87c77ed4912-logs\") pod \"horizon-6d9d97cc85-t5jdm\" (UID: \"666d76d2-bd8a-4533-86ef-d87c77ed4912\") " pod="openstack/horizon-6d9d97cc85-t5jdm" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.223378 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-5tgzd"] Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.224522 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-5tgzd" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.225663 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsznk\" (UniqueName: \"kubernetes.io/projected/ce75899e-c98a-4483-940a-f5b67166ced5-kube-api-access-tsznk\") pod \"dnsmasq-dns-5959f8865f-8bxvr\" (UID: \"ce75899e-c98a-4483-940a-f5b67166ced5\") " pod="openstack/dnsmasq-dns-5959f8865f-8bxvr" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.226011 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-8bxvr" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.236834 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-db6f49d9f-4k7d7"] Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.238403 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-db6f49d9f-4k7d7" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.239691 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-7fgkr" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.240139 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.240247 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.251039 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-wxq9x"] Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.252081 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-wxq9x" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.259966 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-5tgzd"] Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.270877 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-db6f49d9f-4k7d7"] Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.285647 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-xmrn4" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.285887 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.285984 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-wxq9x"] Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.302039 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-8bxvr"] Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.303734 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2d99d0c-9a87-4d80-8105-5c86158f6770-scripts\") pod \"ceilometer-0\" (UID: \"e2d99d0c-9a87-4d80-8105-5c86158f6770\") " pod="openstack/ceilometer-0" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.303835 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/666d76d2-bd8a-4533-86ef-d87c77ed4912-config-data\") pod \"horizon-6d9d97cc85-t5jdm\" (UID: \"666d76d2-bd8a-4533-86ef-d87c77ed4912\") " pod="openstack/horizon-6d9d97cc85-t5jdm" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.303867 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/e2d99d0c-9a87-4d80-8105-5c86158f6770-log-httpd\") pod \"ceilometer-0\" (UID: \"e2d99d0c-9a87-4d80-8105-5c86158f6770\") " pod="openstack/ceilometer-0" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.303890 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/666d76d2-bd8a-4533-86ef-d87c77ed4912-scripts\") pod \"horizon-6d9d97cc85-t5jdm\" (UID: \"666d76d2-bd8a-4533-86ef-d87c77ed4912\") " pod="openstack/horizon-6d9d97cc85-t5jdm" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.303908 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2d99d0c-9a87-4d80-8105-5c86158f6770-config-data\") pod \"ceilometer-0\" (UID: \"e2d99d0c-9a87-4d80-8105-5c86158f6770\") " pod="openstack/ceilometer-0" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.303931 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk9r7\" (UniqueName: \"kubernetes.io/projected/666d76d2-bd8a-4533-86ef-d87c77ed4912-kube-api-access-mk9r7\") pod \"horizon-6d9d97cc85-t5jdm\" (UID: \"666d76d2-bd8a-4533-86ef-d87c77ed4912\") " pod="openstack/horizon-6d9d97cc85-t5jdm" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.303959 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1688b2e2-1aaf-49e0-8414-0f12bb079aba-scripts\") pod \"cinder-db-sync-c9b9j\" (UID: \"1688b2e2-1aaf-49e0-8414-0f12bb079aba\") " pod="openstack/cinder-db-sync-c9b9j" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.303985 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cdd5\" (UniqueName: \"kubernetes.io/projected/1688b2e2-1aaf-49e0-8414-0f12bb079aba-kube-api-access-4cdd5\") pod \"cinder-db-sync-c9b9j\" 
(UID: \"1688b2e2-1aaf-49e0-8414-0f12bb079aba\") " pod="openstack/cinder-db-sync-c9b9j" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.304002 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2d99d0c-9a87-4d80-8105-5c86158f6770-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e2d99d0c-9a87-4d80-8105-5c86158f6770\") " pod="openstack/ceilometer-0" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.304032 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2d99d0c-9a87-4d80-8105-5c86158f6770-run-httpd\") pod \"ceilometer-0\" (UID: \"e2d99d0c-9a87-4d80-8105-5c86158f6770\") " pod="openstack/ceilometer-0" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.304054 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/666d76d2-bd8a-4533-86ef-d87c77ed4912-logs\") pod \"horizon-6d9d97cc85-t5jdm\" (UID: \"666d76d2-bd8a-4533-86ef-d87c77ed4912\") " pod="openstack/horizon-6d9d97cc85-t5jdm" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.304078 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1688b2e2-1aaf-49e0-8414-0f12bb079aba-config-data\") pod \"cinder-db-sync-c9b9j\" (UID: \"1688b2e2-1aaf-49e0-8414-0f12bb079aba\") " pod="openstack/cinder-db-sync-c9b9j" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.304103 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1688b2e2-1aaf-49e0-8414-0f12bb079aba-combined-ca-bundle\") pod \"cinder-db-sync-c9b9j\" (UID: \"1688b2e2-1aaf-49e0-8414-0f12bb079aba\") " pod="openstack/cinder-db-sync-c9b9j" Feb 28 04:53:11 crc 
kubenswrapper[5014]: I0228 04:53:11.304121 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1688b2e2-1aaf-49e0-8414-0f12bb079aba-db-sync-config-data\") pod \"cinder-db-sync-c9b9j\" (UID: \"1688b2e2-1aaf-49e0-8414-0f12bb079aba\") " pod="openstack/cinder-db-sync-c9b9j" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.304157 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1688b2e2-1aaf-49e0-8414-0f12bb079aba-etc-machine-id\") pod \"cinder-db-sync-c9b9j\" (UID: \"1688b2e2-1aaf-49e0-8414-0f12bb079aba\") " pod="openstack/cinder-db-sync-c9b9j" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.304179 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhzqt\" (UniqueName: \"kubernetes.io/projected/e2d99d0c-9a87-4d80-8105-5c86158f6770-kube-api-access-vhzqt\") pod \"ceilometer-0\" (UID: \"e2d99d0c-9a87-4d80-8105-5c86158f6770\") " pod="openstack/ceilometer-0" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.304197 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d99d0c-9a87-4d80-8105-5c86158f6770-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e2d99d0c-9a87-4d80-8105-5c86158f6770\") " pod="openstack/ceilometer-0" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.304236 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/666d76d2-bd8a-4533-86ef-d87c77ed4912-horizon-secret-key\") pod \"horizon-6d9d97cc85-t5jdm\" (UID: \"666d76d2-bd8a-4533-86ef-d87c77ed4912\") " pod="openstack/horizon-6d9d97cc85-t5jdm" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.312142 5014 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/666d76d2-bd8a-4533-86ef-d87c77ed4912-logs\") pod \"horizon-6d9d97cc85-t5jdm\" (UID: \"666d76d2-bd8a-4533-86ef-d87c77ed4912\") " pod="openstack/horizon-6d9d97cc85-t5jdm" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.313047 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/666d76d2-bd8a-4533-86ef-d87c77ed4912-config-data\") pod \"horizon-6d9d97cc85-t5jdm\" (UID: \"666d76d2-bd8a-4533-86ef-d87c77ed4912\") " pod="openstack/horizon-6d9d97cc85-t5jdm" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.313610 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/666d76d2-bd8a-4533-86ef-d87c77ed4912-horizon-secret-key\") pod \"horizon-6d9d97cc85-t5jdm\" (UID: \"666d76d2-bd8a-4533-86ef-d87c77ed4912\") " pod="openstack/horizon-6d9d97cc85-t5jdm" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.314448 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/666d76d2-bd8a-4533-86ef-d87c77ed4912-scripts\") pod \"horizon-6d9d97cc85-t5jdm\" (UID: \"666d76d2-bd8a-4533-86ef-d87c77ed4912\") " pod="openstack/horizon-6d9d97cc85-t5jdm" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.348580 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-7sqlf"] Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.358176 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-7sqlf" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.365894 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-7sqlf"] Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.374878 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.375158 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-nq8p7" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.375270 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.375352 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-qvmzc"] Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.376745 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-qvmzc" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.380531 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk9r7\" (UniqueName: \"kubernetes.io/projected/666d76d2-bd8a-4533-86ef-d87c77ed4912-kube-api-access-mk9r7\") pod \"horizon-6d9d97cc85-t5jdm\" (UID: \"666d76d2-bd8a-4533-86ef-d87c77ed4912\") " pod="openstack/horizon-6d9d97cc85-t5jdm" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.388080 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6d9d97cc85-t5jdm" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.421129 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2d99d0c-9a87-4d80-8105-5c86158f6770-scripts\") pod \"ceilometer-0\" (UID: \"e2d99d0c-9a87-4d80-8105-5c86158f6770\") " pod="openstack/ceilometer-0" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.421192 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a22e1d3e-80dc-44d0-b199-588397ea177e-horizon-secret-key\") pod \"horizon-db6f49d9f-4k7d7\" (UID: \"a22e1d3e-80dc-44d0-b199-588397ea177e\") " pod="openstack/horizon-db6f49d9f-4k7d7" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.421220 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2d99d0c-9a87-4d80-8105-5c86158f6770-log-httpd\") pod \"ceilometer-0\" (UID: \"e2d99d0c-9a87-4d80-8105-5c86158f6770\") " pod="openstack/ceilometer-0" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.421249 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2d99d0c-9a87-4d80-8105-5c86158f6770-config-data\") pod \"ceilometer-0\" (UID: \"e2d99d0c-9a87-4d80-8105-5c86158f6770\") " pod="openstack/ceilometer-0" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.421270 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a22e1d3e-80dc-44d0-b199-588397ea177e-config-data\") pod \"horizon-db6f49d9f-4k7d7\" (UID: \"a22e1d3e-80dc-44d0-b199-588397ea177e\") " pod="openstack/horizon-db6f49d9f-4k7d7" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.421295 5014 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5e88418-60bd-44ee-8272-245ee92460c6-logs\") pod \"placement-db-sync-5tgzd\" (UID: \"c5e88418-60bd-44ee-8272-245ee92460c6\") " pod="openstack/placement-db-sync-5tgzd" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.421321 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1688b2e2-1aaf-49e0-8414-0f12bb079aba-scripts\") pod \"cinder-db-sync-c9b9j\" (UID: \"1688b2e2-1aaf-49e0-8414-0f12bb079aba\") " pod="openstack/cinder-db-sync-c9b9j" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.421345 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5c7j\" (UniqueName: \"kubernetes.io/projected/57f91015-35f5-486c-a88c-0a90f76724e5-kube-api-access-v5c7j\") pod \"barbican-db-sync-wxq9x\" (UID: \"57f91015-35f5-486c-a88c-0a90f76724e5\") " pod="openstack/barbican-db-sync-wxq9x" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.421373 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cdd5\" (UniqueName: \"kubernetes.io/projected/1688b2e2-1aaf-49e0-8414-0f12bb079aba-kube-api-access-4cdd5\") pod \"cinder-db-sync-c9b9j\" (UID: \"1688b2e2-1aaf-49e0-8414-0f12bb079aba\") " pod="openstack/cinder-db-sync-c9b9j" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.421397 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2d99d0c-9a87-4d80-8105-5c86158f6770-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e2d99d0c-9a87-4d80-8105-5c86158f6770\") " pod="openstack/ceilometer-0" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.421417 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/a22e1d3e-80dc-44d0-b199-588397ea177e-scripts\") pod \"horizon-db6f49d9f-4k7d7\" (UID: \"a22e1d3e-80dc-44d0-b199-588397ea177e\") " pod="openstack/horizon-db6f49d9f-4k7d7" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.421447 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n84l9\" (UniqueName: \"kubernetes.io/projected/a22e1d3e-80dc-44d0-b199-588397ea177e-kube-api-access-n84l9\") pod \"horizon-db6f49d9f-4k7d7\" (UID: \"a22e1d3e-80dc-44d0-b199-588397ea177e\") " pod="openstack/horizon-db6f49d9f-4k7d7" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.421474 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57f91015-35f5-486c-a88c-0a90f76724e5-combined-ca-bundle\") pod \"barbican-db-sync-wxq9x\" (UID: \"57f91015-35f5-486c-a88c-0a90f76724e5\") " pod="openstack/barbican-db-sync-wxq9x" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.421497 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2d99d0c-9a87-4d80-8105-5c86158f6770-run-httpd\") pod \"ceilometer-0\" (UID: \"e2d99d0c-9a87-4d80-8105-5c86158f6770\") " pod="openstack/ceilometer-0" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.421521 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/57f91015-35f5-486c-a88c-0a90f76724e5-db-sync-config-data\") pod \"barbican-db-sync-wxq9x\" (UID: \"57f91015-35f5-486c-a88c-0a90f76724e5\") " pod="openstack/barbican-db-sync-wxq9x" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.421545 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/1688b2e2-1aaf-49e0-8414-0f12bb079aba-config-data\") pod \"cinder-db-sync-c9b9j\" (UID: \"1688b2e2-1aaf-49e0-8414-0f12bb079aba\") " pod="openstack/cinder-db-sync-c9b9j" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.421564 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1688b2e2-1aaf-49e0-8414-0f12bb079aba-combined-ca-bundle\") pod \"cinder-db-sync-c9b9j\" (UID: \"1688b2e2-1aaf-49e0-8414-0f12bb079aba\") " pod="openstack/cinder-db-sync-c9b9j" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.421581 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5e88418-60bd-44ee-8272-245ee92460c6-scripts\") pod \"placement-db-sync-5tgzd\" (UID: \"c5e88418-60bd-44ee-8272-245ee92460c6\") " pod="openstack/placement-db-sync-5tgzd" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.421600 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1688b2e2-1aaf-49e0-8414-0f12bb079aba-db-sync-config-data\") pod \"cinder-db-sync-c9b9j\" (UID: \"1688b2e2-1aaf-49e0-8414-0f12bb079aba\") " pod="openstack/cinder-db-sync-c9b9j" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.421619 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5e88418-60bd-44ee-8272-245ee92460c6-config-data\") pod \"placement-db-sync-5tgzd\" (UID: \"c5e88418-60bd-44ee-8272-245ee92460c6\") " pod="openstack/placement-db-sync-5tgzd" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.421638 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tll8\" (UniqueName: 
\"kubernetes.io/projected/c5e88418-60bd-44ee-8272-245ee92460c6-kube-api-access-8tll8\") pod \"placement-db-sync-5tgzd\" (UID: \"c5e88418-60bd-44ee-8272-245ee92460c6\") " pod="openstack/placement-db-sync-5tgzd" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.421653 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a22e1d3e-80dc-44d0-b199-588397ea177e-logs\") pod \"horizon-db6f49d9f-4k7d7\" (UID: \"a22e1d3e-80dc-44d0-b199-588397ea177e\") " pod="openstack/horizon-db6f49d9f-4k7d7" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.421675 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5e88418-60bd-44ee-8272-245ee92460c6-combined-ca-bundle\") pod \"placement-db-sync-5tgzd\" (UID: \"c5e88418-60bd-44ee-8272-245ee92460c6\") " pod="openstack/placement-db-sync-5tgzd" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.421691 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1688b2e2-1aaf-49e0-8414-0f12bb079aba-etc-machine-id\") pod \"cinder-db-sync-c9b9j\" (UID: \"1688b2e2-1aaf-49e0-8414-0f12bb079aba\") " pod="openstack/cinder-db-sync-c9b9j" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.421708 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhzqt\" (UniqueName: \"kubernetes.io/projected/e2d99d0c-9a87-4d80-8105-5c86158f6770-kube-api-access-vhzqt\") pod \"ceilometer-0\" (UID: \"e2d99d0c-9a87-4d80-8105-5c86158f6770\") " pod="openstack/ceilometer-0" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.421726 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d99d0c-9a87-4d80-8105-5c86158f6770-combined-ca-bundle\") 
pod \"ceilometer-0\" (UID: \"e2d99d0c-9a87-4d80-8105-5c86158f6770\") " pod="openstack/ceilometer-0" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.423003 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2d99d0c-9a87-4d80-8105-5c86158f6770-log-httpd\") pod \"ceilometer-0\" (UID: \"e2d99d0c-9a87-4d80-8105-5c86158f6770\") " pod="openstack/ceilometer-0" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.433791 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d99d0c-9a87-4d80-8105-5c86158f6770-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e2d99d0c-9a87-4d80-8105-5c86158f6770\") " pod="openstack/ceilometer-0" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.433870 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1688b2e2-1aaf-49e0-8414-0f12bb079aba-etc-machine-id\") pod \"cinder-db-sync-c9b9j\" (UID: \"1688b2e2-1aaf-49e0-8414-0f12bb079aba\") " pod="openstack/cinder-db-sync-c9b9j" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.434117 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2d99d0c-9a87-4d80-8105-5c86158f6770-run-httpd\") pod \"ceilometer-0\" (UID: \"e2d99d0c-9a87-4d80-8105-5c86158f6770\") " pod="openstack/ceilometer-0" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.440287 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1688b2e2-1aaf-49e0-8414-0f12bb079aba-config-data\") pod \"cinder-db-sync-c9b9j\" (UID: \"1688b2e2-1aaf-49e0-8414-0f12bb079aba\") " pod="openstack/cinder-db-sync-c9b9j" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.441116 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2d99d0c-9a87-4d80-8105-5c86158f6770-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e2d99d0c-9a87-4d80-8105-5c86158f6770\") " pod="openstack/ceilometer-0" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.442324 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mzbcc" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.444093 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2d99d0c-9a87-4d80-8105-5c86158f6770-config-data\") pod \"ceilometer-0\" (UID: \"e2d99d0c-9a87-4d80-8105-5c86158f6770\") " pod="openstack/ceilometer-0" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.444215 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1688b2e2-1aaf-49e0-8414-0f12bb079aba-db-sync-config-data\") pod \"cinder-db-sync-c9b9j\" (UID: \"1688b2e2-1aaf-49e0-8414-0f12bb079aba\") " pod="openstack/cinder-db-sync-c9b9j" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.446434 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1688b2e2-1aaf-49e0-8414-0f12bb079aba-combined-ca-bundle\") pod \"cinder-db-sync-c9b9j\" (UID: \"1688b2e2-1aaf-49e0-8414-0f12bb079aba\") " pod="openstack/cinder-db-sync-c9b9j" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.460341 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cdd5\" (UniqueName: \"kubernetes.io/projected/1688b2e2-1aaf-49e0-8414-0f12bb079aba-kube-api-access-4cdd5\") pod \"cinder-db-sync-c9b9j\" (UID: \"1688b2e2-1aaf-49e0-8414-0f12bb079aba\") " pod="openstack/cinder-db-sync-c9b9j" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.460521 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/1688b2e2-1aaf-49e0-8414-0f12bb079aba-scripts\") pod \"cinder-db-sync-c9b9j\" (UID: \"1688b2e2-1aaf-49e0-8414-0f12bb079aba\") " pod="openstack/cinder-db-sync-c9b9j" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.468325 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2d99d0c-9a87-4d80-8105-5c86158f6770-scripts\") pod \"ceilometer-0\" (UID: \"e2d99d0c-9a87-4d80-8105-5c86158f6770\") " pod="openstack/ceilometer-0" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.473568 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhzqt\" (UniqueName: \"kubernetes.io/projected/e2d99d0c-9a87-4d80-8105-5c86158f6770-kube-api-access-vhzqt\") pod \"ceilometer-0\" (UID: \"e2d99d0c-9a87-4d80-8105-5c86158f6770\") " pod="openstack/ceilometer-0" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.496321 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-qvmzc"] Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.526646 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5e88418-60bd-44ee-8272-245ee92460c6-scripts\") pod \"placement-db-sync-5tgzd\" (UID: \"c5e88418-60bd-44ee-8272-245ee92460c6\") " pod="openstack/placement-db-sync-5tgzd" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.526705 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aab6e228-490d-463f-952e-3723bc4b5fad-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-qvmzc\" (UID: \"aab6e228-490d-463f-952e-3723bc4b5fad\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qvmzc" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.526722 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1736698-f0bd-493f-a03e-dc1957763f1a-combined-ca-bundle\") pod \"neutron-db-sync-7sqlf\" (UID: \"f1736698-f0bd-493f-a03e-dc1957763f1a\") " pod="openstack/neutron-db-sync-7sqlf" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.528248 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aab6e228-490d-463f-952e-3723bc4b5fad-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-qvmzc\" (UID: \"aab6e228-490d-463f-952e-3723bc4b5fad\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qvmzc" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.528279 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5e88418-60bd-44ee-8272-245ee92460c6-config-data\") pod \"placement-db-sync-5tgzd\" (UID: \"c5e88418-60bd-44ee-8272-245ee92460c6\") " pod="openstack/placement-db-sync-5tgzd" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.528319 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tll8\" (UniqueName: \"kubernetes.io/projected/c5e88418-60bd-44ee-8272-245ee92460c6-kube-api-access-8tll8\") pod \"placement-db-sync-5tgzd\" (UID: \"c5e88418-60bd-44ee-8272-245ee92460c6\") " pod="openstack/placement-db-sync-5tgzd" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.528337 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a22e1d3e-80dc-44d0-b199-588397ea177e-logs\") pod \"horizon-db6f49d9f-4k7d7\" (UID: \"a22e1d3e-80dc-44d0-b199-588397ea177e\") " pod="openstack/horizon-db6f49d9f-4k7d7" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.528362 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/aab6e228-490d-463f-952e-3723bc4b5fad-config\") pod \"dnsmasq-dns-58dd9ff6bc-qvmzc\" (UID: \"aab6e228-490d-463f-952e-3723bc4b5fad\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qvmzc" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.528394 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5e88418-60bd-44ee-8272-245ee92460c6-combined-ca-bundle\") pod \"placement-db-sync-5tgzd\" (UID: \"c5e88418-60bd-44ee-8272-245ee92460c6\") " pod="openstack/placement-db-sync-5tgzd" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.528427 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aab6e228-490d-463f-952e-3723bc4b5fad-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-qvmzc\" (UID: \"aab6e228-490d-463f-952e-3723bc4b5fad\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qvmzc" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.528490 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a22e1d3e-80dc-44d0-b199-588397ea177e-horizon-secret-key\") pod \"horizon-db6f49d9f-4k7d7\" (UID: \"a22e1d3e-80dc-44d0-b199-588397ea177e\") " pod="openstack/horizon-db6f49d9f-4k7d7" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.528509 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfc9f\" (UniqueName: \"kubernetes.io/projected/aab6e228-490d-463f-952e-3723bc4b5fad-kube-api-access-vfc9f\") pod \"dnsmasq-dns-58dd9ff6bc-qvmzc\" (UID: \"aab6e228-490d-463f-952e-3723bc4b5fad\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qvmzc" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.528546 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkmkc\" 
(UniqueName: \"kubernetes.io/projected/f1736698-f0bd-493f-a03e-dc1957763f1a-kube-api-access-zkmkc\") pod \"neutron-db-sync-7sqlf\" (UID: \"f1736698-f0bd-493f-a03e-dc1957763f1a\") " pod="openstack/neutron-db-sync-7sqlf" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.528566 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a22e1d3e-80dc-44d0-b199-588397ea177e-config-data\") pod \"horizon-db6f49d9f-4k7d7\" (UID: \"a22e1d3e-80dc-44d0-b199-588397ea177e\") " pod="openstack/horizon-db6f49d9f-4k7d7" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.528601 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5e88418-60bd-44ee-8272-245ee92460c6-logs\") pod \"placement-db-sync-5tgzd\" (UID: \"c5e88418-60bd-44ee-8272-245ee92460c6\") " pod="openstack/placement-db-sync-5tgzd" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.528637 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aab6e228-490d-463f-952e-3723bc4b5fad-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-qvmzc\" (UID: \"aab6e228-490d-463f-952e-3723bc4b5fad\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qvmzc" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.528663 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5c7j\" (UniqueName: \"kubernetes.io/projected/57f91015-35f5-486c-a88c-0a90f76724e5-kube-api-access-v5c7j\") pod \"barbican-db-sync-wxq9x\" (UID: \"57f91015-35f5-486c-a88c-0a90f76724e5\") " pod="openstack/barbican-db-sync-wxq9x" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.528714 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a22e1d3e-80dc-44d0-b199-588397ea177e-scripts\") pod 
\"horizon-db6f49d9f-4k7d7\" (UID: \"a22e1d3e-80dc-44d0-b199-588397ea177e\") " pod="openstack/horizon-db6f49d9f-4k7d7" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.528741 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n84l9\" (UniqueName: \"kubernetes.io/projected/a22e1d3e-80dc-44d0-b199-588397ea177e-kube-api-access-n84l9\") pod \"horizon-db6f49d9f-4k7d7\" (UID: \"a22e1d3e-80dc-44d0-b199-588397ea177e\") " pod="openstack/horizon-db6f49d9f-4k7d7" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.528755 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f1736698-f0bd-493f-a03e-dc1957763f1a-config\") pod \"neutron-db-sync-7sqlf\" (UID: \"f1736698-f0bd-493f-a03e-dc1957763f1a\") " pod="openstack/neutron-db-sync-7sqlf" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.528787 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57f91015-35f5-486c-a88c-0a90f76724e5-combined-ca-bundle\") pod \"barbican-db-sync-wxq9x\" (UID: \"57f91015-35f5-486c-a88c-0a90f76724e5\") " pod="openstack/barbican-db-sync-wxq9x" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.528837 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/57f91015-35f5-486c-a88c-0a90f76724e5-db-sync-config-data\") pod \"barbican-db-sync-wxq9x\" (UID: \"57f91015-35f5-486c-a88c-0a90f76724e5\") " pod="openstack/barbican-db-sync-wxq9x" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.533040 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a22e1d3e-80dc-44d0-b199-588397ea177e-logs\") pod \"horizon-db6f49d9f-4k7d7\" (UID: \"a22e1d3e-80dc-44d0-b199-588397ea177e\") " 
pod="openstack/horizon-db6f49d9f-4k7d7" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.535374 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a22e1d3e-80dc-44d0-b199-588397ea177e-config-data\") pod \"horizon-db6f49d9f-4k7d7\" (UID: \"a22e1d3e-80dc-44d0-b199-588397ea177e\") " pod="openstack/horizon-db6f49d9f-4k7d7" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.535650 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5e88418-60bd-44ee-8272-245ee92460c6-logs\") pod \"placement-db-sync-5tgzd\" (UID: \"c5e88418-60bd-44ee-8272-245ee92460c6\") " pod="openstack/placement-db-sync-5tgzd" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.536443 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a22e1d3e-80dc-44d0-b199-588397ea177e-scripts\") pod \"horizon-db6f49d9f-4k7d7\" (UID: \"a22e1d3e-80dc-44d0-b199-588397ea177e\") " pod="openstack/horizon-db6f49d9f-4k7d7" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.537730 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/57f91015-35f5-486c-a88c-0a90f76724e5-db-sync-config-data\") pod \"barbican-db-sync-wxq9x\" (UID: \"57f91015-35f5-486c-a88c-0a90f76724e5\") " pod="openstack/barbican-db-sync-wxq9x" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.537998 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a22e1d3e-80dc-44d0-b199-588397ea177e-horizon-secret-key\") pod \"horizon-db6f49d9f-4k7d7\" (UID: \"a22e1d3e-80dc-44d0-b199-588397ea177e\") " pod="openstack/horizon-db6f49d9f-4k7d7" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.538168 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5e88418-60bd-44ee-8272-245ee92460c6-scripts\") pod \"placement-db-sync-5tgzd\" (UID: \"c5e88418-60bd-44ee-8272-245ee92460c6\") " pod="openstack/placement-db-sync-5tgzd" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.536610 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5e88418-60bd-44ee-8272-245ee92460c6-combined-ca-bundle\") pod \"placement-db-sync-5tgzd\" (UID: \"c5e88418-60bd-44ee-8272-245ee92460c6\") " pod="openstack/placement-db-sync-5tgzd" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.540247 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5e88418-60bd-44ee-8272-245ee92460c6-config-data\") pod \"placement-db-sync-5tgzd\" (UID: \"c5e88418-60bd-44ee-8272-245ee92460c6\") " pod="openstack/placement-db-sync-5tgzd" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.542015 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57f91015-35f5-486c-a88c-0a90f76724e5-combined-ca-bundle\") pod \"barbican-db-sync-wxq9x\" (UID: \"57f91015-35f5-486c-a88c-0a90f76724e5\") " pod="openstack/barbican-db-sync-wxq9x" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.557243 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tll8\" (UniqueName: \"kubernetes.io/projected/c5e88418-60bd-44ee-8272-245ee92460c6-kube-api-access-8tll8\") pod \"placement-db-sync-5tgzd\" (UID: \"c5e88418-60bd-44ee-8272-245ee92460c6\") " pod="openstack/placement-db-sync-5tgzd" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.562221 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n84l9\" (UniqueName: \"kubernetes.io/projected/a22e1d3e-80dc-44d0-b199-588397ea177e-kube-api-access-n84l9\") pod 
\"horizon-db6f49d9f-4k7d7\" (UID: \"a22e1d3e-80dc-44d0-b199-588397ea177e\") " pod="openstack/horizon-db6f49d9f-4k7d7" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.562283 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-vqwf8" event={"ID":"080ff429-38a5-459f-b650-9090593c1da1","Type":"ContainerStarted","Data":"22bcee49c99849054b19a838ade3a66debb4c8356bb237d122436afecfeb614c"} Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.562340 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-vqwf8" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.563888 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5c7j\" (UniqueName: \"kubernetes.io/projected/57f91015-35f5-486c-a88c-0a90f76724e5-kube-api-access-v5c7j\") pod \"barbican-db-sync-wxq9x\" (UID: \"57f91015-35f5-486c-a88c-0a90f76724e5\") " pod="openstack/barbican-db-sync-wxq9x" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.611880 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-vqwf8" podStartSLOduration=3.6118608549999998 podStartE2EDuration="3.611860855s" podCreationTimestamp="2026-02-28 04:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:53:11.581327832 +0000 UTC m=+1180.251453742" watchObservedRunningTime="2026-02-28 04:53:11.611860855 +0000 UTC m=+1180.281986765" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.618197 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.633796 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aab6e228-490d-463f-952e-3723bc4b5fad-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-qvmzc\" (UID: \"aab6e228-490d-463f-952e-3723bc4b5fad\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qvmzc" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.634473 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f1736698-f0bd-493f-a03e-dc1957763f1a-config\") pod \"neutron-db-sync-7sqlf\" (UID: \"f1736698-f0bd-493f-a03e-dc1957763f1a\") " pod="openstack/neutron-db-sync-7sqlf" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.634567 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aab6e228-490d-463f-952e-3723bc4b5fad-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-qvmzc\" (UID: \"aab6e228-490d-463f-952e-3723bc4b5fad\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qvmzc" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.634584 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1736698-f0bd-493f-a03e-dc1957763f1a-combined-ca-bundle\") pod \"neutron-db-sync-7sqlf\" (UID: \"f1736698-f0bd-493f-a03e-dc1957763f1a\") " pod="openstack/neutron-db-sync-7sqlf" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.634601 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aab6e228-490d-463f-952e-3723bc4b5fad-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-qvmzc\" (UID: \"aab6e228-490d-463f-952e-3723bc4b5fad\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qvmzc" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 
04:53:11.634628 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aab6e228-490d-463f-952e-3723bc4b5fad-config\") pod \"dnsmasq-dns-58dd9ff6bc-qvmzc\" (UID: \"aab6e228-490d-463f-952e-3723bc4b5fad\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qvmzc" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.634655 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aab6e228-490d-463f-952e-3723bc4b5fad-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-qvmzc\" (UID: \"aab6e228-490d-463f-952e-3723bc4b5fad\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qvmzc" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.634717 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfc9f\" (UniqueName: \"kubernetes.io/projected/aab6e228-490d-463f-952e-3723bc4b5fad-kube-api-access-vfc9f\") pod \"dnsmasq-dns-58dd9ff6bc-qvmzc\" (UID: \"aab6e228-490d-463f-952e-3723bc4b5fad\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qvmzc" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.634738 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkmkc\" (UniqueName: \"kubernetes.io/projected/f1736698-f0bd-493f-a03e-dc1957763f1a-kube-api-access-zkmkc\") pod \"neutron-db-sync-7sqlf\" (UID: \"f1736698-f0bd-493f-a03e-dc1957763f1a\") " pod="openstack/neutron-db-sync-7sqlf" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.635878 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aab6e228-490d-463f-952e-3723bc4b5fad-dns-svc\") pod \"dnsmasq-dns-58dd9ff6bc-qvmzc\" (UID: \"aab6e228-490d-463f-952e-3723bc4b5fad\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qvmzc" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.640534 5014 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aab6e228-490d-463f-952e-3723bc4b5fad-dns-swift-storage-0\") pod \"dnsmasq-dns-58dd9ff6bc-qvmzc\" (UID: \"aab6e228-490d-463f-952e-3723bc4b5fad\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qvmzc" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.641455 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aab6e228-490d-463f-952e-3723bc4b5fad-ovsdbserver-sb\") pod \"dnsmasq-dns-58dd9ff6bc-qvmzc\" (UID: \"aab6e228-490d-463f-952e-3723bc4b5fad\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qvmzc" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.642309 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/f1736698-f0bd-493f-a03e-dc1957763f1a-config\") pod \"neutron-db-sync-7sqlf\" (UID: \"f1736698-f0bd-493f-a03e-dc1957763f1a\") " pod="openstack/neutron-db-sync-7sqlf" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.643315 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aab6e228-490d-463f-952e-3723bc4b5fad-config\") pod \"dnsmasq-dns-58dd9ff6bc-qvmzc\" (UID: \"aab6e228-490d-463f-952e-3723bc4b5fad\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qvmzc" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.643657 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aab6e228-490d-463f-952e-3723bc4b5fad-ovsdbserver-nb\") pod \"dnsmasq-dns-58dd9ff6bc-qvmzc\" (UID: \"aab6e228-490d-463f-952e-3723bc4b5fad\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qvmzc" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.652368 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1736698-f0bd-493f-a03e-dc1957763f1a-combined-ca-bundle\") pod 
\"neutron-db-sync-7sqlf\" (UID: \"f1736698-f0bd-493f-a03e-dc1957763f1a\") " pod="openstack/neutron-db-sync-7sqlf" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.657652 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkmkc\" (UniqueName: \"kubernetes.io/projected/f1736698-f0bd-493f-a03e-dc1957763f1a-kube-api-access-zkmkc\") pod \"neutron-db-sync-7sqlf\" (UID: \"f1736698-f0bd-493f-a03e-dc1957763f1a\") " pod="openstack/neutron-db-sync-7sqlf" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.661494 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfc9f\" (UniqueName: \"kubernetes.io/projected/aab6e228-490d-463f-952e-3723bc4b5fad-kube-api-access-vfc9f\") pod \"dnsmasq-dns-58dd9ff6bc-qvmzc\" (UID: \"aab6e228-490d-463f-952e-3723bc4b5fad\") " pod="openstack/dnsmasq-dns-58dd9ff6bc-qvmzc" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.666942 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-c9b9j" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.716257 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-5tgzd" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.775938 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-db6f49d9f-4k7d7" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.828650 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-wxq9x" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.865433 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-7sqlf" Feb 28 04:53:11 crc kubenswrapper[5014]: I0228 04:53:11.871427 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-qvmzc" Feb 28 04:53:12 crc kubenswrapper[5014]: I0228 04:53:12.008798 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-8bxvr"] Feb 28 04:53:12 crc kubenswrapper[5014]: W0228 04:53:12.085699 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce75899e_c98a_4483_940a_f5b67166ced5.slice/crio-cc6e34cdb753d2f0a10895e852e20cf7a6a9c30bc2e078a3cc4ce4f1f5b53baf WatchSource:0}: Error finding container cc6e34cdb753d2f0a10895e852e20cf7a6a9c30bc2e078a3cc4ce4f1f5b53baf: Status 404 returned error can't find the container with id cc6e34cdb753d2f0a10895e852e20cf7a6a9c30bc2e078a3cc4ce4f1f5b53baf Feb 28 04:53:12 crc kubenswrapper[5014]: I0228 04:53:12.211726 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6d9d97cc85-t5jdm"] Feb 28 04:53:12 crc kubenswrapper[5014]: I0228 04:53:12.250419 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-tz5jx" Feb 28 04:53:12 crc kubenswrapper[5014]: I0228 04:53:12.363265 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d232d598-8b65-47f6-a5dc-9d77d37d9b80-config-data\") pod \"d232d598-8b65-47f6-a5dc-9d77d37d9b80\" (UID: \"d232d598-8b65-47f6-a5dc-9d77d37d9b80\") " Feb 28 04:53:12 crc kubenswrapper[5014]: I0228 04:53:12.363341 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d232d598-8b65-47f6-a5dc-9d77d37d9b80-db-sync-config-data\") pod \"d232d598-8b65-47f6-a5dc-9d77d37d9b80\" (UID: \"d232d598-8b65-47f6-a5dc-9d77d37d9b80\") " Feb 28 04:53:12 crc kubenswrapper[5014]: I0228 04:53:12.363375 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2j248\" (UniqueName: \"kubernetes.io/projected/d232d598-8b65-47f6-a5dc-9d77d37d9b80-kube-api-access-2j248\") pod \"d232d598-8b65-47f6-a5dc-9d77d37d9b80\" (UID: \"d232d598-8b65-47f6-a5dc-9d77d37d9b80\") " Feb 28 04:53:12 crc kubenswrapper[5014]: I0228 04:53:12.363588 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d232d598-8b65-47f6-a5dc-9d77d37d9b80-combined-ca-bundle\") pod \"d232d598-8b65-47f6-a5dc-9d77d37d9b80\" (UID: \"d232d598-8b65-47f6-a5dc-9d77d37d9b80\") " Feb 28 04:53:12 crc kubenswrapper[5014]: I0228 04:53:12.386988 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d232d598-8b65-47f6-a5dc-9d77d37d9b80-kube-api-access-2j248" (OuterVolumeSpecName: "kube-api-access-2j248") pod "d232d598-8b65-47f6-a5dc-9d77d37d9b80" (UID: "d232d598-8b65-47f6-a5dc-9d77d37d9b80"). InnerVolumeSpecName "kube-api-access-2j248". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:53:12 crc kubenswrapper[5014]: I0228 04:53:12.393105 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d232d598-8b65-47f6-a5dc-9d77d37d9b80-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d232d598-8b65-47f6-a5dc-9d77d37d9b80" (UID: "d232d598-8b65-47f6-a5dc-9d77d37d9b80"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:53:12 crc kubenswrapper[5014]: I0228 04:53:12.421172 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d232d598-8b65-47f6-a5dc-9d77d37d9b80-config-data" (OuterVolumeSpecName: "config-data") pod "d232d598-8b65-47f6-a5dc-9d77d37d9b80" (UID: "d232d598-8b65-47f6-a5dc-9d77d37d9b80"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:53:12 crc kubenswrapper[5014]: I0228 04:53:12.440751 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d232d598-8b65-47f6-a5dc-9d77d37d9b80-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d232d598-8b65-47f6-a5dc-9d77d37d9b80" (UID: "d232d598-8b65-47f6-a5dc-9d77d37d9b80"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:53:12 crc kubenswrapper[5014]: I0228 04:53:12.443836 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-mzbcc"] Feb 28 04:53:12 crc kubenswrapper[5014]: I0228 04:53:12.465612 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d232d598-8b65-47f6-a5dc-9d77d37d9b80-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:12 crc kubenswrapper[5014]: I0228 04:53:12.465713 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d232d598-8b65-47f6-a5dc-9d77d37d9b80-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:12 crc kubenswrapper[5014]: I0228 04:53:12.465764 5014 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d232d598-8b65-47f6-a5dc-9d77d37d9b80-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:12 crc kubenswrapper[5014]: I0228 04:53:12.465827 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2j248\" (UniqueName: \"kubernetes.io/projected/d232d598-8b65-47f6-a5dc-9d77d37d9b80-kube-api-access-2j248\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:12 crc kubenswrapper[5014]: I0228 04:53:12.578296 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-tz5jx" Feb 28 04:53:12 crc kubenswrapper[5014]: I0228 04:53:12.578302 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-tz5jx" event={"ID":"d232d598-8b65-47f6-a5dc-9d77d37d9b80","Type":"ContainerDied","Data":"050158c92a08f38e760f4c84eb7c0b9e74fafb8edcdedee3b95b5e369c0a8f41"} Feb 28 04:53:12 crc kubenswrapper[5014]: I0228 04:53:12.578344 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="050158c92a08f38e760f4c84eb7c0b9e74fafb8edcdedee3b95b5e369c0a8f41" Feb 28 04:53:12 crc kubenswrapper[5014]: I0228 04:53:12.579776 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d9d97cc85-t5jdm" event={"ID":"666d76d2-bd8a-4533-86ef-d87c77ed4912","Type":"ContainerStarted","Data":"9defa71ba224f01bf8368aaaeff8a4ae49bca8894f72098a9a1233122cd7e4c7"} Feb 28 04:53:12 crc kubenswrapper[5014]: I0228 04:53:12.580830 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mzbcc" event={"ID":"c00825d9-a4c0-40d9-b77c-e1661747f42d","Type":"ContainerStarted","Data":"69eea2e5d56c68426b26e5bae43b07ccb9ff367b85ce49913d222600b9c1e2b1"} Feb 28 04:53:12 crc kubenswrapper[5014]: I0228 04:53:12.582399 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-8bxvr" event={"ID":"ce75899e-c98a-4483-940a-f5b67166ced5","Type":"ContainerStarted","Data":"cd48aefdfe39d08e6e3457b5add12388a50cf836338b3ff40f39d367327253ac"} Feb 28 04:53:12 crc kubenswrapper[5014]: I0228 04:53:12.582429 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-8bxvr" event={"ID":"ce75899e-c98a-4483-940a-f5b67166ced5","Type":"ContainerStarted","Data":"cc6e34cdb753d2f0a10895e852e20cf7a6a9c30bc2e078a3cc4ce4f1f5b53baf"} Feb 28 04:53:12 crc kubenswrapper[5014]: I0228 04:53:12.582606 5014 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-764c5664d7-vqwf8" podUID="080ff429-38a5-459f-b650-9090593c1da1" containerName="dnsmasq-dns" containerID="cri-o://22bcee49c99849054b19a838ade3a66debb4c8356bb237d122436afecfeb614c" gracePeriod=10 Feb 28 04:53:12 crc kubenswrapper[5014]: I0228 04:53:12.765460 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-db6f49d9f-4k7d7"] Feb 28 04:53:12 crc kubenswrapper[5014]: I0228 04:53:12.786113 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-5tgzd"] Feb 28 04:53:12 crc kubenswrapper[5014]: W0228 04:53:12.834528 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod57f91015_35f5_486c_a88c_0a90f76724e5.slice/crio-39a866eb31f65b2b9453c1b418776eb2164d6d45a055c6c2f12d6813885225fc WatchSource:0}: Error finding container 39a866eb31f65b2b9453c1b418776eb2164d6d45a055c6c2f12d6813885225fc: Status 404 returned error can't find the container with id 39a866eb31f65b2b9453c1b418776eb2164d6d45a055c6c2f12d6813885225fc Feb 28 04:53:12 crc kubenswrapper[5014]: I0228 04:53:12.850042 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-wxq9x"] Feb 28 04:53:12 crc kubenswrapper[5014]: W0228 04:53:12.868366 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2d99d0c_9a87_4d80_8105_5c86158f6770.slice/crio-5c29a9dcdf73310ca958d248b363a67133714e707d1c1dd02702d60166510deb WatchSource:0}: Error finding container 5c29a9dcdf73310ca958d248b363a67133714e707d1c1dd02702d60166510deb: Status 404 returned error can't find the container with id 5c29a9dcdf73310ca958d248b363a67133714e707d1c1dd02702d60166510deb Feb 28 04:53:12 crc kubenswrapper[5014]: I0228 04:53:12.884267 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-7sqlf"] Feb 28 04:53:12 crc kubenswrapper[5014]: I0228 04:53:12.911948 
5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:53:12 crc kubenswrapper[5014]: W0228 04:53:12.934226 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf1736698_f0bd_493f_a03e_dc1957763f1a.slice/crio-9c318295dbc17d749f58aa83e052a60937df15e71491d3becbe60491b77ea5b7 WatchSource:0}: Error finding container 9c318295dbc17d749f58aa83e052a60937df15e71491d3becbe60491b77ea5b7: Status 404 returned error can't find the container with id 9c318295dbc17d749f58aa83e052a60937df15e71491d3becbe60491b77ea5b7 Feb 28 04:53:12 crc kubenswrapper[5014]: W0228 04:53:12.937914 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1688b2e2_1aaf_49e0_8414_0f12bb079aba.slice/crio-a1ce457678cf30e76262305beb9d0a55f4f947d960f49e5ae53697abdbf26055 WatchSource:0}: Error finding container a1ce457678cf30e76262305beb9d0a55f4f947d960f49e5ae53697abdbf26055: Status 404 returned error can't find the container with id a1ce457678cf30e76262305beb9d0a55f4f947d960f49e5ae53697abdbf26055 Feb 28 04:53:12 crc kubenswrapper[5014]: I0228 04:53:12.951342 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-c9b9j"] Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.007412 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-qvmzc"] Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.042035 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-qvmzc"] Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.042370 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-8bxvr" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.071462 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-sv6qs"] Feb 28 04:53:13 crc kubenswrapper[5014]: E0228 04:53:13.071847 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d232d598-8b65-47f6-a5dc-9d77d37d9b80" containerName="glance-db-sync" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.071864 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="d232d598-8b65-47f6-a5dc-9d77d37d9b80" containerName="glance-db-sync" Feb 28 04:53:13 crc kubenswrapper[5014]: E0228 04:53:13.071886 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce75899e-c98a-4483-940a-f5b67166ced5" containerName="init" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.071894 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce75899e-c98a-4483-940a-f5b67166ced5" containerName="init" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.072045 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="d232d598-8b65-47f6-a5dc-9d77d37d9b80" containerName="glance-db-sync" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.072069 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce75899e-c98a-4483-940a-f5b67166ced5" containerName="init" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.075149 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-sv6qs" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.109684 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-sv6qs"] Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.194506 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce75899e-c98a-4483-940a-f5b67166ced5-config\") pod \"ce75899e-c98a-4483-940a-f5b67166ced5\" (UID: \"ce75899e-c98a-4483-940a-f5b67166ced5\") " Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.194553 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce75899e-c98a-4483-940a-f5b67166ced5-dns-swift-storage-0\") pod \"ce75899e-c98a-4483-940a-f5b67166ced5\" (UID: \"ce75899e-c98a-4483-940a-f5b67166ced5\") " Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.194580 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce75899e-c98a-4483-940a-f5b67166ced5-ovsdbserver-sb\") pod \"ce75899e-c98a-4483-940a-f5b67166ced5\" (UID: \"ce75899e-c98a-4483-940a-f5b67166ced5\") " Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.194608 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsznk\" (UniqueName: \"kubernetes.io/projected/ce75899e-c98a-4483-940a-f5b67166ced5-kube-api-access-tsznk\") pod \"ce75899e-c98a-4483-940a-f5b67166ced5\" (UID: \"ce75899e-c98a-4483-940a-f5b67166ced5\") " Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.194644 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce75899e-c98a-4483-940a-f5b67166ced5-ovsdbserver-nb\") pod \"ce75899e-c98a-4483-940a-f5b67166ced5\" (UID: 
\"ce75899e-c98a-4483-940a-f5b67166ced5\") " Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.197183 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce75899e-c98a-4483-940a-f5b67166ced5-dns-svc\") pod \"ce75899e-c98a-4483-940a-f5b67166ced5\" (UID: \"ce75899e-c98a-4483-940a-f5b67166ced5\") " Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.197419 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/651265e8-74ac-412e-a823-a7e19b2c04b6-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-sv6qs\" (UID: \"651265e8-74ac-412e-a823-a7e19b2c04b6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sv6qs" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.197450 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/651265e8-74ac-412e-a823-a7e19b2c04b6-config\") pod \"dnsmasq-dns-785d8bcb8c-sv6qs\" (UID: \"651265e8-74ac-412e-a823-a7e19b2c04b6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sv6qs" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.197501 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ld2l\" (UniqueName: \"kubernetes.io/projected/651265e8-74ac-412e-a823-a7e19b2c04b6-kube-api-access-4ld2l\") pod \"dnsmasq-dns-785d8bcb8c-sv6qs\" (UID: \"651265e8-74ac-412e-a823-a7e19b2c04b6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sv6qs" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.197576 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/651265e8-74ac-412e-a823-a7e19b2c04b6-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-sv6qs\" (UID: \"651265e8-74ac-412e-a823-a7e19b2c04b6\") " 
pod="openstack/dnsmasq-dns-785d8bcb8c-sv6qs" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.197624 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/651265e8-74ac-412e-a823-a7e19b2c04b6-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-sv6qs\" (UID: \"651265e8-74ac-412e-a823-a7e19b2c04b6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sv6qs" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.197658 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/651265e8-74ac-412e-a823-a7e19b2c04b6-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-sv6qs\" (UID: \"651265e8-74ac-412e-a823-a7e19b2c04b6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sv6qs" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.225965 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce75899e-c98a-4483-940a-f5b67166ced5-kube-api-access-tsznk" (OuterVolumeSpecName: "kube-api-access-tsznk") pod "ce75899e-c98a-4483-940a-f5b67166ced5" (UID: "ce75899e-c98a-4483-940a-f5b67166ced5"). InnerVolumeSpecName "kube-api-access-tsznk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.268490 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce75899e-c98a-4483-940a-f5b67166ced5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ce75899e-c98a-4483-940a-f5b67166ced5" (UID: "ce75899e-c98a-4483-940a-f5b67166ced5"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.305153 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/651265e8-74ac-412e-a823-a7e19b2c04b6-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-sv6qs\" (UID: \"651265e8-74ac-412e-a823-a7e19b2c04b6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sv6qs" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.305236 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/651265e8-74ac-412e-a823-a7e19b2c04b6-config\") pod \"dnsmasq-dns-785d8bcb8c-sv6qs\" (UID: \"651265e8-74ac-412e-a823-a7e19b2c04b6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sv6qs" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.305335 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ld2l\" (UniqueName: \"kubernetes.io/projected/651265e8-74ac-412e-a823-a7e19b2c04b6-kube-api-access-4ld2l\") pod \"dnsmasq-dns-785d8bcb8c-sv6qs\" (UID: \"651265e8-74ac-412e-a823-a7e19b2c04b6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sv6qs" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.305453 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/651265e8-74ac-412e-a823-a7e19b2c04b6-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-sv6qs\" (UID: \"651265e8-74ac-412e-a823-a7e19b2c04b6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sv6qs" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.305516 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/651265e8-74ac-412e-a823-a7e19b2c04b6-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-sv6qs\" (UID: \"651265e8-74ac-412e-a823-a7e19b2c04b6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sv6qs" Feb 
28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.305564 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/651265e8-74ac-412e-a823-a7e19b2c04b6-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-sv6qs\" (UID: \"651265e8-74ac-412e-a823-a7e19b2c04b6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sv6qs" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.305665 5014 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ce75899e-c98a-4483-940a-f5b67166ced5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.305675 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsznk\" (UniqueName: \"kubernetes.io/projected/ce75899e-c98a-4483-940a-f5b67166ced5-kube-api-access-tsznk\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.310767 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/651265e8-74ac-412e-a823-a7e19b2c04b6-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-sv6qs\" (UID: \"651265e8-74ac-412e-a823-a7e19b2c04b6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sv6qs" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.311421 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/651265e8-74ac-412e-a823-a7e19b2c04b6-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-sv6qs\" (UID: \"651265e8-74ac-412e-a823-a7e19b2c04b6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sv6qs" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.314645 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce75899e-c98a-4483-940a-f5b67166ced5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ce75899e-c98a-4483-940a-f5b67166ced5" (UID: 
"ce75899e-c98a-4483-940a-f5b67166ced5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.315384 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/651265e8-74ac-412e-a823-a7e19b2c04b6-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-sv6qs\" (UID: \"651265e8-74ac-412e-a823-a7e19b2c04b6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sv6qs" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.315558 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/651265e8-74ac-412e-a823-a7e19b2c04b6-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-sv6qs\" (UID: \"651265e8-74ac-412e-a823-a7e19b2c04b6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sv6qs" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.328271 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce75899e-c98a-4483-940a-f5b67166ced5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ce75899e-c98a-4483-940a-f5b67166ced5" (UID: "ce75899e-c98a-4483-940a-f5b67166ced5"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.342472 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/651265e8-74ac-412e-a823-a7e19b2c04b6-config\") pod \"dnsmasq-dns-785d8bcb8c-sv6qs\" (UID: \"651265e8-74ac-412e-a823-a7e19b2c04b6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sv6qs" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.363380 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce75899e-c98a-4483-940a-f5b67166ced5-config" (OuterVolumeSpecName: "config") pod "ce75899e-c98a-4483-940a-f5b67166ced5" (UID: "ce75899e-c98a-4483-940a-f5b67166ced5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.371189 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ld2l\" (UniqueName: \"kubernetes.io/projected/651265e8-74ac-412e-a823-a7e19b2c04b6-kube-api-access-4ld2l\") pod \"dnsmasq-dns-785d8bcb8c-sv6qs\" (UID: \"651265e8-74ac-412e-a823-a7e19b2c04b6\") " pod="openstack/dnsmasq-dns-785d8bcb8c-sv6qs" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.407199 5014 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ce75899e-c98a-4483-940a-f5b67166ced5-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.407244 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce75899e-c98a-4483-940a-f5b67166ced5-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.407256 5014 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ce75899e-c98a-4483-940a-f5b67166ced5-ovsdbserver-nb\") on node \"crc\" 
DevicePath \"\"" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.414659 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce75899e-c98a-4483-940a-f5b67166ced5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ce75899e-c98a-4483-940a-f5b67166ced5" (UID: "ce75899e-c98a-4483-940a-f5b67166ced5"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.508367 5014 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ce75899e-c98a-4483-940a-f5b67166ced5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.514158 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-vqwf8" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.527494 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-sv6qs" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.544172 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6d9d97cc85-t5jdm"] Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.580172 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.612027 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/080ff429-38a5-459f-b650-9090593c1da1-config\") pod \"080ff429-38a5-459f-b650-9090593c1da1\" (UID: \"080ff429-38a5-459f-b650-9090593c1da1\") " Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.612062 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/080ff429-38a5-459f-b650-9090593c1da1-dns-svc\") pod \"080ff429-38a5-459f-b650-9090593c1da1\" (UID: \"080ff429-38a5-459f-b650-9090593c1da1\") " Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.612087 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/080ff429-38a5-459f-b650-9090593c1da1-dns-swift-storage-0\") pod \"080ff429-38a5-459f-b650-9090593c1da1\" (UID: \"080ff429-38a5-459f-b650-9090593c1da1\") " Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.612120 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/080ff429-38a5-459f-b650-9090593c1da1-ovsdbserver-nb\") pod \"080ff429-38a5-459f-b650-9090593c1da1\" (UID: \"080ff429-38a5-459f-b650-9090593c1da1\") " Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.612149 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfc5n\" (UniqueName: 
\"kubernetes.io/projected/080ff429-38a5-459f-b650-9090593c1da1-kube-api-access-lfc5n\") pod \"080ff429-38a5-459f-b650-9090593c1da1\" (UID: \"080ff429-38a5-459f-b650-9090593c1da1\") " Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.612230 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/080ff429-38a5-459f-b650-9090593c1da1-ovsdbserver-sb\") pod \"080ff429-38a5-459f-b650-9090593c1da1\" (UID: \"080ff429-38a5-459f-b650-9090593c1da1\") " Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.631055 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/080ff429-38a5-459f-b650-9090593c1da1-kube-api-access-lfc5n" (OuterVolumeSpecName: "kube-api-access-lfc5n") pod "080ff429-38a5-459f-b650-9090593c1da1" (UID: "080ff429-38a5-459f-b650-9090593c1da1"). InnerVolumeSpecName "kube-api-access-lfc5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.635236 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5c865cc775-z8ptx"] Feb 28 04:53:13 crc kubenswrapper[5014]: E0228 04:53:13.636368 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="080ff429-38a5-459f-b650-9090593c1da1" containerName="init" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.636392 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="080ff429-38a5-459f-b650-9090593c1da1" containerName="init" Feb 28 04:53:13 crc kubenswrapper[5014]: E0228 04:53:13.636425 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="080ff429-38a5-459f-b650-9090593c1da1" containerName="dnsmasq-dns" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.636433 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="080ff429-38a5-459f-b650-9090593c1da1" containerName="dnsmasq-dns" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.636499 5014 
generic.go:334] "Generic (PLEG): container finished" podID="080ff429-38a5-459f-b650-9090593c1da1" containerID="22bcee49c99849054b19a838ade3a66debb4c8356bb237d122436afecfeb614c" exitCode=0 Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.636692 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="080ff429-38a5-459f-b650-9090593c1da1" containerName="dnsmasq-dns" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.636700 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-vqwf8" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.639580 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-vqwf8" event={"ID":"080ff429-38a5-459f-b650-9090593c1da1","Type":"ContainerDied","Data":"22bcee49c99849054b19a838ade3a66debb4c8356bb237d122436afecfeb614c"} Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.639659 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-vqwf8" event={"ID":"080ff429-38a5-459f-b650-9090593c1da1","Type":"ContainerDied","Data":"34ab4169f2a25cae64c0619b4e80b04e6a60cdb8c3bd392488a88e1c0ddd6fa0"} Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.639712 5014 scope.go:117] "RemoveContainer" containerID="22bcee49c99849054b19a838ade3a66debb4c8356bb237d122436afecfeb614c" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.639799 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5tgzd" event={"ID":"c5e88418-60bd-44ee-8272-245ee92460c6","Type":"ContainerStarted","Data":"5f1a68560bd3185e7a017b7d8df92bb22b87836b9e153298d591142930d4d214"} Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.639978 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5c865cc775-z8ptx" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.665089 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-7sqlf" event={"ID":"f1736698-f0bd-493f-a03e-dc1957763f1a","Type":"ContainerStarted","Data":"9c318295dbc17d749f58aa83e052a60937df15e71491d3becbe60491b77ea5b7"} Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.685988 5014 generic.go:334] "Generic (PLEG): container finished" podID="ce75899e-c98a-4483-940a-f5b67166ced5" containerID="cd48aefdfe39d08e6e3457b5add12388a50cf836338b3ff40f39d367327253ac" exitCode=0 Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.686063 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-8bxvr" event={"ID":"ce75899e-c98a-4483-940a-f5b67166ced5","Type":"ContainerDied","Data":"cd48aefdfe39d08e6e3457b5add12388a50cf836338b3ff40f39d367327253ac"} Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.686088 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5959f8865f-8bxvr" event={"ID":"ce75899e-c98a-4483-940a-f5b67166ced5","Type":"ContainerDied","Data":"cc6e34cdb753d2f0a10895e852e20cf7a6a9c30bc2e078a3cc4ce4f1f5b53baf"} Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.686232 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5959f8865f-8bxvr" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.701574 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wxq9x" event={"ID":"57f91015-35f5-486c-a88c-0a90f76724e5","Type":"ContainerStarted","Data":"39a866eb31f65b2b9453c1b418776eb2164d6d45a055c6c2f12d6813885225fc"} Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.714281 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9ef476e7-cd3d-4eb4-abdd-bd677cb3da87-horizon-secret-key\") pod \"horizon-5c865cc775-z8ptx\" (UID: \"9ef476e7-cd3d-4eb4-abdd-bd677cb3da87\") " pod="openstack/horizon-5c865cc775-z8ptx" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.719528 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ef476e7-cd3d-4eb4-abdd-bd677cb3da87-scripts\") pod \"horizon-5c865cc775-z8ptx\" (UID: \"9ef476e7-cd3d-4eb4-abdd-bd677cb3da87\") " pod="openstack/horizon-5c865cc775-z8ptx" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.719739 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fczr4\" (UniqueName: \"kubernetes.io/projected/9ef476e7-cd3d-4eb4-abdd-bd677cb3da87-kube-api-access-fczr4\") pod \"horizon-5c865cc775-z8ptx\" (UID: \"9ef476e7-cd3d-4eb4-abdd-bd677cb3da87\") " pod="openstack/horizon-5c865cc775-z8ptx" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.719883 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ef476e7-cd3d-4eb4-abdd-bd677cb3da87-logs\") pod \"horizon-5c865cc775-z8ptx\" (UID: \"9ef476e7-cd3d-4eb4-abdd-bd677cb3da87\") " pod="openstack/horizon-5c865cc775-z8ptx" Feb 28 04:53:13 crc 
kubenswrapper[5014]: I0228 04:53:13.719972 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ef476e7-cd3d-4eb4-abdd-bd677cb3da87-config-data\") pod \"horizon-5c865cc775-z8ptx\" (UID: \"9ef476e7-cd3d-4eb4-abdd-bd677cb3da87\") " pod="openstack/horizon-5c865cc775-z8ptx" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.720082 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfc5n\" (UniqueName: \"kubernetes.io/projected/080ff429-38a5-459f-b650-9090593c1da1-kube-api-access-lfc5n\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.726147 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/080ff429-38a5-459f-b650-9090593c1da1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "080ff429-38a5-459f-b650-9090593c1da1" (UID: "080ff429-38a5-459f-b650-9090593c1da1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.764541 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/080ff429-38a5-459f-b650-9090593c1da1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "080ff429-38a5-459f-b650-9090593c1da1" (UID: "080ff429-38a5-459f-b650-9090593c1da1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.777386 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/080ff429-38a5-459f-b650-9090593c1da1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "080ff429-38a5-459f-b650-9090593c1da1" (UID: "080ff429-38a5-459f-b650-9090593c1da1"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.777438 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c865cc775-z8ptx"] Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.778487 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-c9b9j" event={"ID":"1688b2e2-1aaf-49e0-8414-0f12bb079aba","Type":"ContainerStarted","Data":"a1ce457678cf30e76262305beb9d0a55f4f947d960f49e5ae53697abdbf26055"} Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.786046 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/080ff429-38a5-459f-b650-9090593c1da1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "080ff429-38a5-459f-b650-9090593c1da1" (UID: "080ff429-38a5-459f-b650-9090593c1da1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.789403 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/080ff429-38a5-459f-b650-9090593c1da1-config" (OuterVolumeSpecName: "config") pod "080ff429-38a5-459f-b650-9090593c1da1" (UID: "080ff429-38a5-459f-b650-9090593c1da1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.793908 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.796322 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.797378 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2d99d0c-9a87-4d80-8105-5c86158f6770","Type":"ContainerStarted","Data":"5c29a9dcdf73310ca958d248b363a67133714e707d1c1dd02702d60166510deb"} Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.800569 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.804048 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mzbcc" event={"ID":"c00825d9-a4c0-40d9-b77c-e1661747f42d","Type":"ContainerStarted","Data":"1dfc961674ce32798797b2b57b0df42b1f6a3fdec1ff279e9a2b12082e3ccd9d"} Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.804559 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.804818 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qvwbm" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.814109 5014 scope.go:117] "RemoveContainer" containerID="8788d56d128881efde211114b2e42f10824c84a67c38a8f3a37237ee67632811" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.819035 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-db6f49d9f-4k7d7" event={"ID":"a22e1d3e-80dc-44d0-b199-588397ea177e","Type":"ContainerStarted","Data":"68851ea4829ae2eef053801406a032b909f3f876a46073a6436122b31fb88530"} Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.821050 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9ef476e7-cd3d-4eb4-abdd-bd677cb3da87-horizon-secret-key\") pod \"horizon-5c865cc775-z8ptx\" (UID: 
\"9ef476e7-cd3d-4eb4-abdd-bd677cb3da87\") " pod="openstack/horizon-5c865cc775-z8ptx" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.821253 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ef476e7-cd3d-4eb4-abdd-bd677cb3da87-scripts\") pod \"horizon-5c865cc775-z8ptx\" (UID: \"9ef476e7-cd3d-4eb4-abdd-bd677cb3da87\") " pod="openstack/horizon-5c865cc775-z8ptx" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.821384 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fczr4\" (UniqueName: \"kubernetes.io/projected/9ef476e7-cd3d-4eb4-abdd-bd677cb3da87-kube-api-access-fczr4\") pod \"horizon-5c865cc775-z8ptx\" (UID: \"9ef476e7-cd3d-4eb4-abdd-bd677cb3da87\") " pod="openstack/horizon-5c865cc775-z8ptx" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.821483 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ef476e7-cd3d-4eb4-abdd-bd677cb3da87-logs\") pod \"horizon-5c865cc775-z8ptx\" (UID: \"9ef476e7-cd3d-4eb4-abdd-bd677cb3da87\") " pod="openstack/horizon-5c865cc775-z8ptx" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.821559 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ef476e7-cd3d-4eb4-abdd-bd677cb3da87-config-data\") pod \"horizon-5c865cc775-z8ptx\" (UID: \"9ef476e7-cd3d-4eb4-abdd-bd677cb3da87\") " pod="openstack/horizon-5c865cc775-z8ptx" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.821650 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/080ff429-38a5-459f-b650-9090593c1da1-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.821705 5014 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/080ff429-38a5-459f-b650-9090593c1da1-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.821753 5014 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/080ff429-38a5-459f-b650-9090593c1da1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.821821 5014 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/080ff429-38a5-459f-b650-9090593c1da1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.821876 5014 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/080ff429-38a5-459f-b650-9090593c1da1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.822307 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ef476e7-cd3d-4eb4-abdd-bd677cb3da87-scripts\") pod \"horizon-5c865cc775-z8ptx\" (UID: \"9ef476e7-cd3d-4eb4-abdd-bd677cb3da87\") " pod="openstack/horizon-5c865cc775-z8ptx" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.829567 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ef476e7-cd3d-4eb4-abdd-bd677cb3da87-logs\") pod \"horizon-5c865cc775-z8ptx\" (UID: \"9ef476e7-cd3d-4eb4-abdd-bd677cb3da87\") " pod="openstack/horizon-5c865cc775-z8ptx" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.830181 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ef476e7-cd3d-4eb4-abdd-bd677cb3da87-config-data\") pod \"horizon-5c865cc775-z8ptx\" (UID: \"9ef476e7-cd3d-4eb4-abdd-bd677cb3da87\") " pod="openstack/horizon-5c865cc775-z8ptx" 
Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.831653 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9ef476e7-cd3d-4eb4-abdd-bd677cb3da87-horizon-secret-key\") pod \"horizon-5c865cc775-z8ptx\" (UID: \"9ef476e7-cd3d-4eb4-abdd-bd677cb3da87\") " pod="openstack/horizon-5c865cc775-z8ptx" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.842359 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.848235 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fczr4\" (UniqueName: \"kubernetes.io/projected/9ef476e7-cd3d-4eb4-abdd-bd677cb3da87-kube-api-access-fczr4\") pod \"horizon-5c865cc775-z8ptx\" (UID: \"9ef476e7-cd3d-4eb4-abdd-bd677cb3da87\") " pod="openstack/horizon-5c865cc775-z8ptx" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.852264 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-qvmzc" event={"ID":"aab6e228-490d-463f-952e-3723bc4b5fad","Type":"ContainerStarted","Data":"d7b9cbcbd6e92fd0901908c03b80e28217cce72e5cde770530fc87fd7089a8e9"} Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.888046 5014 scope.go:117] "RemoveContainer" containerID="22bcee49c99849054b19a838ade3a66debb4c8356bb237d122436afecfeb614c" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.891301 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 04:53:13 crc kubenswrapper[5014]: E0228 04:53:13.892854 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22bcee49c99849054b19a838ade3a66debb4c8356bb237d122436afecfeb614c\": container with ID starting with 22bcee49c99849054b19a838ade3a66debb4c8356bb237d122436afecfeb614c not found: ID does not exist" 
containerID="22bcee49c99849054b19a838ade3a66debb4c8356bb237d122436afecfeb614c" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.892890 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22bcee49c99849054b19a838ade3a66debb4c8356bb237d122436afecfeb614c"} err="failed to get container status \"22bcee49c99849054b19a838ade3a66debb4c8356bb237d122436afecfeb614c\": rpc error: code = NotFound desc = could not find container \"22bcee49c99849054b19a838ade3a66debb4c8356bb237d122436afecfeb614c\": container with ID starting with 22bcee49c99849054b19a838ade3a66debb4c8356bb237d122436afecfeb614c not found: ID does not exist" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.892913 5014 scope.go:117] "RemoveContainer" containerID="8788d56d128881efde211114b2e42f10824c84a67c38a8f3a37237ee67632811" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.892939 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 28 04:53:13 crc kubenswrapper[5014]: E0228 04:53:13.893532 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8788d56d128881efde211114b2e42f10824c84a67c38a8f3a37237ee67632811\": container with ID starting with 8788d56d128881efde211114b2e42f10824c84a67c38a8f3a37237ee67632811 not found: ID does not exist" containerID="8788d56d128881efde211114b2e42f10824c84a67c38a8f3a37237ee67632811" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.893552 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8788d56d128881efde211114b2e42f10824c84a67c38a8f3a37237ee67632811"} err="failed to get container status \"8788d56d128881efde211114b2e42f10824c84a67c38a8f3a37237ee67632811\": rpc error: code = NotFound desc = could not find container \"8788d56d128881efde211114b2e42f10824c84a67c38a8f3a37237ee67632811\": container with ID starting with 
8788d56d128881efde211114b2e42f10824c84a67c38a8f3a37237ee67632811 not found: ID does not exist" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.893564 5014 scope.go:117] "RemoveContainer" containerID="cd48aefdfe39d08e6e3457b5add12388a50cf836338b3ff40f39d367327253ac" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.895357 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.904332 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.928022 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e12dde4-3f6a-4ddc-b8bc-385ccc197453-config-data\") pod \"glance-default-external-api-0\" (UID: \"1e12dde4-3f6a-4ddc-b8bc-385ccc197453\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.928147 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e12dde4-3f6a-4ddc-b8bc-385ccc197453-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1e12dde4-3f6a-4ddc-b8bc-385ccc197453\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.928173 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"1e12dde4-3f6a-4ddc-b8bc-385ccc197453\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.928245 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/1e12dde4-3f6a-4ddc-b8bc-385ccc197453-scripts\") pod \"glance-default-external-api-0\" (UID: \"1e12dde4-3f6a-4ddc-b8bc-385ccc197453\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.928310 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qf4p\" (UniqueName: \"kubernetes.io/projected/1e12dde4-3f6a-4ddc-b8bc-385ccc197453-kube-api-access-5qf4p\") pod \"glance-default-external-api-0\" (UID: \"1e12dde4-3f6a-4ddc-b8bc-385ccc197453\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.928615 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e12dde4-3f6a-4ddc-b8bc-385ccc197453-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1e12dde4-3f6a-4ddc-b8bc-385ccc197453\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.928645 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e12dde4-3f6a-4ddc-b8bc-385ccc197453-logs\") pod \"glance-default-external-api-0\" (UID: \"1e12dde4-3f6a-4ddc-b8bc-385ccc197453\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.934441 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-8bxvr"] Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.943941 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5959f8865f-8bxvr"] Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.950248 5014 scope.go:117] "RemoveContainer" containerID="cd48aefdfe39d08e6e3457b5add12388a50cf836338b3ff40f39d367327253ac" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 
04:53:13.953106 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-mzbcc" podStartSLOduration=3.953085946 podStartE2EDuration="3.953085946s" podCreationTimestamp="2026-02-28 04:53:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:53:13.842245107 +0000 UTC m=+1182.512371017" watchObservedRunningTime="2026-02-28 04:53:13.953085946 +0000 UTC m=+1182.623211856" Feb 28 04:53:13 crc kubenswrapper[5014]: E0228 04:53:13.953969 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd48aefdfe39d08e6e3457b5add12388a50cf836338b3ff40f39d367327253ac\": container with ID starting with cd48aefdfe39d08e6e3457b5add12388a50cf836338b3ff40f39d367327253ac not found: ID does not exist" containerID="cd48aefdfe39d08e6e3457b5add12388a50cf836338b3ff40f39d367327253ac" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.954020 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd48aefdfe39d08e6e3457b5add12388a50cf836338b3ff40f39d367327253ac"} err="failed to get container status \"cd48aefdfe39d08e6e3457b5add12388a50cf836338b3ff40f39d367327253ac\": rpc error: code = NotFound desc = could not find container \"cd48aefdfe39d08e6e3457b5add12388a50cf836338b3ff40f39d367327253ac\": container with ID starting with cd48aefdfe39d08e6e3457b5add12388a50cf836338b3ff40f39d367327253ac not found: ID does not exist" Feb 28 04:53:13 crc kubenswrapper[5014]: I0228 04:53:13.995585 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5c865cc775-z8ptx" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.021886 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-vqwf8"] Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.031530 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-vqwf8"] Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.036184 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e12dde4-3f6a-4ddc-b8bc-385ccc197453-scripts\") pod \"glance-default-external-api-0\" (UID: \"1e12dde4-3f6a-4ddc-b8bc-385ccc197453\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.036222 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.036265 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qf4p\" (UniqueName: \"kubernetes.io/projected/1e12dde4-3f6a-4ddc-b8bc-385ccc197453-kube-api-access-5qf4p\") pod \"glance-default-external-api-0\" (UID: \"1e12dde4-3f6a-4ddc-b8bc-385ccc197453\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.036282 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwzbh\" (UniqueName: \"kubernetes.io/projected/ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3-kube-api-access-bwzbh\") pod \"glance-default-internal-api-0\" (UID: \"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3\") " pod="openstack/glance-default-internal-api-0" 
Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.036324 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.036342 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3-logs\") pod \"glance-default-internal-api-0\" (UID: \"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.036364 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.036386 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e12dde4-3f6a-4ddc-b8bc-385ccc197453-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1e12dde4-3f6a-4ddc-b8bc-385ccc197453\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.036406 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e12dde4-3f6a-4ddc-b8bc-385ccc197453-logs\") pod \"glance-default-external-api-0\" (UID: \"1e12dde4-3f6a-4ddc-b8bc-385ccc197453\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 
04:53:14.036451 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.036473 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.036490 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e12dde4-3f6a-4ddc-b8bc-385ccc197453-config-data\") pod \"glance-default-external-api-0\" (UID: \"1e12dde4-3f6a-4ddc-b8bc-385ccc197453\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.036527 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e12dde4-3f6a-4ddc-b8bc-385ccc197453-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1e12dde4-3f6a-4ddc-b8bc-385ccc197453\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.036544 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"1e12dde4-3f6a-4ddc-b8bc-385ccc197453\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.037509 5014 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e12dde4-3f6a-4ddc-b8bc-385ccc197453-logs\") pod \"glance-default-external-api-0\" (UID: \"1e12dde4-3f6a-4ddc-b8bc-385ccc197453\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.042933 5014 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"1e12dde4-3f6a-4ddc-b8bc-385ccc197453\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.043770 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e12dde4-3f6a-4ddc-b8bc-385ccc197453-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"1e12dde4-3f6a-4ddc-b8bc-385ccc197453\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.045275 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e12dde4-3f6a-4ddc-b8bc-385ccc197453-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"1e12dde4-3f6a-4ddc-b8bc-385ccc197453\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.048308 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e12dde4-3f6a-4ddc-b8bc-385ccc197453-config-data\") pod \"glance-default-external-api-0\" (UID: \"1e12dde4-3f6a-4ddc-b8bc-385ccc197453\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.055097 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/1e12dde4-3f6a-4ddc-b8bc-385ccc197453-scripts\") pod \"glance-default-external-api-0\" (UID: \"1e12dde4-3f6a-4ddc-b8bc-385ccc197453\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.061767 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qf4p\" (UniqueName: \"kubernetes.io/projected/1e12dde4-3f6a-4ddc-b8bc-385ccc197453-kube-api-access-5qf4p\") pod \"glance-default-external-api-0\" (UID: \"1e12dde4-3f6a-4ddc-b8bc-385ccc197453\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.084113 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"1e12dde4-3f6a-4ddc-b8bc-385ccc197453\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.138696 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.138984 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3-logs\") pod \"glance-default-internal-api-0\" (UID: \"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.139009 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.139064 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.139086 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.139134 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.139165 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwzbh\" (UniqueName: \"kubernetes.io/projected/ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3-kube-api-access-bwzbh\") pod \"glance-default-internal-api-0\" (UID: \"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.139993 5014 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3\") device mount path 
\"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.140288 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3-logs\") pod \"glance-default-internal-api-0\" (UID: \"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.140598 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.144103 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.153646 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.154111 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.190198 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.195492 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwzbh\" (UniqueName: \"kubernetes.io/projected/ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3-kube-api-access-bwzbh\") pod \"glance-default-internal-api-0\" (UID: \"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.201832 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="080ff429-38a5-459f-b650-9090593c1da1" path="/var/lib/kubelet/pods/080ff429-38a5-459f-b650-9090593c1da1/volumes" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.203708 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce75899e-c98a-4483-940a-f5b67166ced5" path="/var/lib/kubelet/pods/ce75899e-c98a-4483-940a-f5b67166ced5/volumes" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.242874 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.327159 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-qvmzc" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.448738 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfc9f\" (UniqueName: \"kubernetes.io/projected/aab6e228-490d-463f-952e-3723bc4b5fad-kube-api-access-vfc9f\") pod \"aab6e228-490d-463f-952e-3723bc4b5fad\" (UID: \"aab6e228-490d-463f-952e-3723bc4b5fad\") " Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.448826 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aab6e228-490d-463f-952e-3723bc4b5fad-dns-svc\") pod \"aab6e228-490d-463f-952e-3723bc4b5fad\" (UID: \"aab6e228-490d-463f-952e-3723bc4b5fad\") " Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.448884 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aab6e228-490d-463f-952e-3723bc4b5fad-dns-swift-storage-0\") pod \"aab6e228-490d-463f-952e-3723bc4b5fad\" (UID: \"aab6e228-490d-463f-952e-3723bc4b5fad\") " Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.449012 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aab6e228-490d-463f-952e-3723bc4b5fad-ovsdbserver-sb\") pod \"aab6e228-490d-463f-952e-3723bc4b5fad\" (UID: \"aab6e228-490d-463f-952e-3723bc4b5fad\") " Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.449059 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aab6e228-490d-463f-952e-3723bc4b5fad-config\") pod \"aab6e228-490d-463f-952e-3723bc4b5fad\" (UID: 
\"aab6e228-490d-463f-952e-3723bc4b5fad\") " Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.449100 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aab6e228-490d-463f-952e-3723bc4b5fad-ovsdbserver-nb\") pod \"aab6e228-490d-463f-952e-3723bc4b5fad\" (UID: \"aab6e228-490d-463f-952e-3723bc4b5fad\") " Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.468462 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aab6e228-490d-463f-952e-3723bc4b5fad-kube-api-access-vfc9f" (OuterVolumeSpecName: "kube-api-access-vfc9f") pod "aab6e228-490d-463f-952e-3723bc4b5fad" (UID: "aab6e228-490d-463f-952e-3723bc4b5fad"). InnerVolumeSpecName "kube-api-access-vfc9f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.489561 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aab6e228-490d-463f-952e-3723bc4b5fad-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "aab6e228-490d-463f-952e-3723bc4b5fad" (UID: "aab6e228-490d-463f-952e-3723bc4b5fad"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.490235 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aab6e228-490d-463f-952e-3723bc4b5fad-config" (OuterVolumeSpecName: "config") pod "aab6e228-490d-463f-952e-3723bc4b5fad" (UID: "aab6e228-490d-463f-952e-3723bc4b5fad"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.493436 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aab6e228-490d-463f-952e-3723bc4b5fad-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "aab6e228-490d-463f-952e-3723bc4b5fad" (UID: "aab6e228-490d-463f-952e-3723bc4b5fad"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.494113 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aab6e228-490d-463f-952e-3723bc4b5fad-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "aab6e228-490d-463f-952e-3723bc4b5fad" (UID: "aab6e228-490d-463f-952e-3723bc4b5fad"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.511822 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aab6e228-490d-463f-952e-3723bc4b5fad-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "aab6e228-490d-463f-952e-3723bc4b5fad" (UID: "aab6e228-490d-463f-952e-3723bc4b5fad"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.518570 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.527711 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-sv6qs"] Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.553896 5014 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/aab6e228-490d-463f-952e-3723bc4b5fad-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.553931 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfc9f\" (UniqueName: \"kubernetes.io/projected/aab6e228-490d-463f-952e-3723bc4b5fad-kube-api-access-vfc9f\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.553941 5014 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/aab6e228-490d-463f-952e-3723bc4b5fad-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.553949 5014 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/aab6e228-490d-463f-952e-3723bc4b5fad-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.553960 5014 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/aab6e228-490d-463f-952e-3723bc4b5fad-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.553968 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aab6e228-490d-463f-952e-3723bc4b5fad-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:14 crc kubenswrapper[5014]: W0228 04:53:14.554564 5014 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod651265e8_74ac_412e_a823_a7e19b2c04b6.slice/crio-b159b17ef63f98fd4b0150912c8a76584e5f1ceec75238b366613cbf6ba11f2f WatchSource:0}: Error finding container b159b17ef63f98fd4b0150912c8a76584e5f1ceec75238b366613cbf6ba11f2f: Status 404 returned error can't find the container with id b159b17ef63f98fd4b0150912c8a76584e5f1ceec75238b366613cbf6ba11f2f Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.663949 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c865cc775-z8ptx"] Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.921994 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c865cc775-z8ptx" event={"ID":"9ef476e7-cd3d-4eb4-abdd-bd677cb3da87","Type":"ContainerStarted","Data":"e8d27800f54b70c0e33f6f28be8d6a230530755d3ebedc9244082508b70377f5"} Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.941708 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-sv6qs" event={"ID":"651265e8-74ac-412e-a823-a7e19b2c04b6","Type":"ContainerStarted","Data":"b159b17ef63f98fd4b0150912c8a76584e5f1ceec75238b366613cbf6ba11f2f"} Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.943188 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.954991 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-7sqlf" event={"ID":"f1736698-f0bd-493f-a03e-dc1957763f1a","Type":"ContainerStarted","Data":"7943bd947cd43ddf77c62e9460ccfabe22c48e60eb5f82fe013071110b88514c"} Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.966174 5014 generic.go:334] "Generic (PLEG): container finished" podID="aab6e228-490d-463f-952e-3723bc4b5fad" containerID="ea03d0b82f873580c161bf034870d4baeb04ecfa1c3f5d1e629fc67a9e81dda2" exitCode=0 Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.966476 5014 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-qvmzc" event={"ID":"aab6e228-490d-463f-952e-3723bc4b5fad","Type":"ContainerDied","Data":"d7b9cbcbd6e92fd0901908c03b80e28217cce72e5cde770530fc87fd7089a8e9"} Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.966712 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58dd9ff6bc-qvmzc" event={"ID":"aab6e228-490d-463f-952e-3723bc4b5fad","Type":"ContainerDied","Data":"ea03d0b82f873580c161bf034870d4baeb04ecfa1c3f5d1e629fc67a9e81dda2"} Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.966832 5014 scope.go:117] "RemoveContainer" containerID="ea03d0b82f873580c161bf034870d4baeb04ecfa1c3f5d1e629fc67a9e81dda2" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.967173 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58dd9ff6bc-qvmzc" Feb 28 04:53:14 crc kubenswrapper[5014]: I0228 04:53:14.979612 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-7sqlf" podStartSLOduration=3.9795983919999998 podStartE2EDuration="3.979598392s" podCreationTimestamp="2026-02-28 04:53:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:53:14.978264095 +0000 UTC m=+1183.648390005" watchObservedRunningTime="2026-02-28 04:53:14.979598392 +0000 UTC m=+1183.649724302" Feb 28 04:53:15 crc kubenswrapper[5014]: I0228 04:53:15.053672 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-qvmzc"] Feb 28 04:53:15 crc kubenswrapper[5014]: I0228 04:53:15.070612 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58dd9ff6bc-qvmzc"] Feb 28 04:53:15 crc kubenswrapper[5014]: I0228 04:53:15.132911 5014 scope.go:117] "RemoveContainer" 
containerID="ea03d0b82f873580c161bf034870d4baeb04ecfa1c3f5d1e629fc67a9e81dda2" Feb 28 04:53:15 crc kubenswrapper[5014]: E0228 04:53:15.134105 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea03d0b82f873580c161bf034870d4baeb04ecfa1c3f5d1e629fc67a9e81dda2\": container with ID starting with ea03d0b82f873580c161bf034870d4baeb04ecfa1c3f5d1e629fc67a9e81dda2 not found: ID does not exist" containerID="ea03d0b82f873580c161bf034870d4baeb04ecfa1c3f5d1e629fc67a9e81dda2" Feb 28 04:53:15 crc kubenswrapper[5014]: I0228 04:53:15.134131 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea03d0b82f873580c161bf034870d4baeb04ecfa1c3f5d1e629fc67a9e81dda2"} err="failed to get container status \"ea03d0b82f873580c161bf034870d4baeb04ecfa1c3f5d1e629fc67a9e81dda2\": rpc error: code = NotFound desc = could not find container \"ea03d0b82f873580c161bf034870d4baeb04ecfa1c3f5d1e629fc67a9e81dda2\": container with ID starting with ea03d0b82f873580c161bf034870d4baeb04ecfa1c3f5d1e629fc67a9e81dda2 not found: ID does not exist" Feb 28 04:53:15 crc kubenswrapper[5014]: I0228 04:53:15.222247 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 04:53:15 crc kubenswrapper[5014]: W0228 04:53:15.298928 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee3ace82_5482_4fd0_9e7c_ea9b67d7fdb3.slice/crio-6c74099feae3d9689ca72a0316e43b21ffe1795fa5c5b0790de631eb9a57e642 WatchSource:0}: Error finding container 6c74099feae3d9689ca72a0316e43b21ffe1795fa5c5b0790de631eb9a57e642: Status 404 returned error can't find the container with id 6c74099feae3d9689ca72a0316e43b21ffe1795fa5c5b0790de631eb9a57e642 Feb 28 04:53:15 crc kubenswrapper[5014]: I0228 04:53:15.706249 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:53:15 crc kubenswrapper[5014]: I0228 04:53:15.706301 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:53:16 crc kubenswrapper[5014]: I0228 04:53:16.003940 5014 generic.go:334] "Generic (PLEG): container finished" podID="651265e8-74ac-412e-a823-a7e19b2c04b6" containerID="04122b4e4a5ff26b82494562223746343a136fbfca497e59f68bb121aebe9c97" exitCode=0 Feb 28 04:53:16 crc kubenswrapper[5014]: I0228 04:53:16.005076 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-sv6qs" event={"ID":"651265e8-74ac-412e-a823-a7e19b2c04b6","Type":"ContainerDied","Data":"04122b4e4a5ff26b82494562223746343a136fbfca497e59f68bb121aebe9c97"} Feb 28 04:53:16 crc kubenswrapper[5014]: I0228 04:53:16.045294 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3","Type":"ContainerStarted","Data":"6c74099feae3d9689ca72a0316e43b21ffe1795fa5c5b0790de631eb9a57e642"} Feb 28 04:53:16 crc kubenswrapper[5014]: I0228 04:53:16.066154 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1e12dde4-3f6a-4ddc-b8bc-385ccc197453","Type":"ContainerStarted","Data":"3815023cd78b239a5aafc0f6a33abe3ebc69ea49b153df0bc3f82c37c62a76d1"} Feb 28 04:53:16 crc kubenswrapper[5014]: I0228 04:53:16.196766 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aab6e228-490d-463f-952e-3723bc4b5fad" 
path="/var/lib/kubelet/pods/aab6e228-490d-463f-952e-3723bc4b5fad/volumes" Feb 28 04:53:17 crc kubenswrapper[5014]: I0228 04:53:17.082054 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-sv6qs" event={"ID":"651265e8-74ac-412e-a823-a7e19b2c04b6","Type":"ContainerStarted","Data":"55a544d313216d8183984f8ef62ce60d0445fdc4c04a104b1b368cea381a6fba"} Feb 28 04:53:17 crc kubenswrapper[5014]: I0228 04:53:17.082633 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-sv6qs" Feb 28 04:53:17 crc kubenswrapper[5014]: I0228 04:53:17.092951 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3","Type":"ContainerStarted","Data":"3cd7d471e585c59b7642d41b1b51aaf5dc3ea9903eb5aed1127b7cfbd18c5f7f"} Feb 28 04:53:17 crc kubenswrapper[5014]: I0228 04:53:17.094770 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1e12dde4-3f6a-4ddc-b8bc-385ccc197453","Type":"ContainerStarted","Data":"76be9aea2ff31ef982d06969a0d7098ae12d18a9057baaa82d3b046c794b6ee7"} Feb 28 04:53:17 crc kubenswrapper[5014]: I0228 04:53:17.109638 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-sv6qs" podStartSLOduration=5.109619866 podStartE2EDuration="5.109619866s" podCreationTimestamp="2026-02-28 04:53:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:53:17.101907348 +0000 UTC m=+1185.772033258" watchObservedRunningTime="2026-02-28 04:53:17.109619866 +0000 UTC m=+1185.779745766" Feb 28 04:53:18 crc kubenswrapper[5014]: I0228 04:53:18.117768 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"1e12dde4-3f6a-4ddc-b8bc-385ccc197453","Type":"ContainerStarted","Data":"f5a8424c7e572f058f25fc8dbe43473333257942cc0b83a235f7502a1d6af6b5"} Feb 28 04:53:18 crc kubenswrapper[5014]: I0228 04:53:18.153108 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3","Type":"ContainerStarted","Data":"30385ce18f5065cf1204f32a63834b563f809d5116e2abdb1b2c7059356af00b"} Feb 28 04:53:18 crc kubenswrapper[5014]: I0228 04:53:18.175304 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.175283986 podStartE2EDuration="5.175283986s" podCreationTimestamp="2026-02-28 04:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:53:18.174858205 +0000 UTC m=+1186.844984115" watchObservedRunningTime="2026-02-28 04:53:18.175283986 +0000 UTC m=+1186.845409906" Feb 28 04:53:18 crc kubenswrapper[5014]: I0228 04:53:18.176451 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.176444208 podStartE2EDuration="5.176444208s" podCreationTimestamp="2026-02-28 04:53:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:53:18.144595748 +0000 UTC m=+1186.814721668" watchObservedRunningTime="2026-02-28 04:53:18.176444208 +0000 UTC m=+1186.846570118" Feb 28 04:53:23 crc kubenswrapper[5014]: I0228 04:53:23.203601 5014 generic.go:334] "Generic (PLEG): container finished" podID="c00825d9-a4c0-40d9-b77c-e1661747f42d" containerID="1dfc961674ce32798797b2b57b0df42b1f6a3fdec1ff279e9a2b12082e3ccd9d" exitCode=0 Feb 28 04:53:23 crc kubenswrapper[5014]: I0228 04:53:23.203698 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-mzbcc" event={"ID":"c00825d9-a4c0-40d9-b77c-e1661747f42d","Type":"ContainerDied","Data":"1dfc961674ce32798797b2b57b0df42b1f6a3fdec1ff279e9a2b12082e3ccd9d"} Feb 28 04:53:23 crc kubenswrapper[5014]: I0228 04:53:23.543390 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 04:53:23 crc kubenswrapper[5014]: I0228 04:53:23.543887 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1e12dde4-3f6a-4ddc-b8bc-385ccc197453" containerName="glance-log" containerID="cri-o://76be9aea2ff31ef982d06969a0d7098ae12d18a9057baaa82d3b046c794b6ee7" gracePeriod=30 Feb 28 04:53:23 crc kubenswrapper[5014]: I0228 04:53:23.543960 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="1e12dde4-3f6a-4ddc-b8bc-385ccc197453" containerName="glance-httpd" containerID="cri-o://f5a8424c7e572f058f25fc8dbe43473333257942cc0b83a235f7502a1d6af6b5" gracePeriod=30 Feb 28 04:53:23 crc kubenswrapper[5014]: I0228 04:53:23.545884 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-sv6qs" Feb 28 04:53:23 crc kubenswrapper[5014]: I0228 04:53:23.612275 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-kgdjz"] Feb 28 04:53:23 crc kubenswrapper[5014]: I0228 04:53:23.612482 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-kgdjz" podUID="8b1dde17-8b85-45c0-bef3-a9439be5632e" containerName="dnsmasq-dns" containerID="cri-o://b57907d1126245cbfcac823eeb4015387e57afb28aabba87c1cf311a841e1879" gracePeriod=10 Feb 28 04:53:23 crc kubenswrapper[5014]: I0228 04:53:23.641267 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 04:53:23 crc kubenswrapper[5014]: I0228 
04:53:23.641468 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3" containerName="glance-log" containerID="cri-o://3cd7d471e585c59b7642d41b1b51aaf5dc3ea9903eb5aed1127b7cfbd18c5f7f" gracePeriod=30 Feb 28 04:53:23 crc kubenswrapper[5014]: I0228 04:53:23.642344 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3" containerName="glance-httpd" containerID="cri-o://30385ce18f5065cf1204f32a63834b563f809d5116e2abdb1b2c7059356af00b" gracePeriod=30 Feb 28 04:53:23 crc kubenswrapper[5014]: I0228 04:53:23.962639 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-db6f49d9f-4k7d7"] Feb 28 04:53:23 crc kubenswrapper[5014]: I0228 04:53:23.983907 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6cbc78cbb4-6wlp7"] Feb 28 04:53:23 crc kubenswrapper[5014]: E0228 04:53:23.984276 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aab6e228-490d-463f-952e-3723bc4b5fad" containerName="init" Feb 28 04:53:23 crc kubenswrapper[5014]: I0228 04:53:23.984293 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="aab6e228-490d-463f-952e-3723bc4b5fad" containerName="init" Feb 28 04:53:23 crc kubenswrapper[5014]: I0228 04:53:23.984463 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="aab6e228-490d-463f-952e-3723bc4b5fad" containerName="init" Feb 28 04:53:23 crc kubenswrapper[5014]: I0228 04:53:23.985308 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6cbc78cbb4-6wlp7" Feb 28 04:53:23 crc kubenswrapper[5014]: I0228 04:53:23.992174 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.003158 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6cbc78cbb4-6wlp7"] Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.084529 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5c865cc775-z8ptx"] Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.127822 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-c9c88866d-6m8lj"] Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.130855 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c9c88866d-6m8lj" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.136357 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-c9c88866d-6m8lj"] Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.155509 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-config-data\") pod \"horizon-6cbc78cbb4-6wlp7\" (UID: \"80e6122e-74aa-4ee6-a7a3-4af495cb55b7\") " pod="openstack/horizon-6cbc78cbb4-6wlp7" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.155633 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-horizon-secret-key\") pod \"horizon-6cbc78cbb4-6wlp7\" (UID: \"80e6122e-74aa-4ee6-a7a3-4af495cb55b7\") " pod="openstack/horizon-6cbc78cbb4-6wlp7" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.155706 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-lq49q\" (UniqueName: \"kubernetes.io/projected/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-kube-api-access-lq49q\") pod \"horizon-6cbc78cbb4-6wlp7\" (UID: \"80e6122e-74aa-4ee6-a7a3-4af495cb55b7\") " pod="openstack/horizon-6cbc78cbb4-6wlp7" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.155737 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-logs\") pod \"horizon-6cbc78cbb4-6wlp7\" (UID: \"80e6122e-74aa-4ee6-a7a3-4af495cb55b7\") " pod="openstack/horizon-6cbc78cbb4-6wlp7" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.155764 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-scripts\") pod \"horizon-6cbc78cbb4-6wlp7\" (UID: \"80e6122e-74aa-4ee6-a7a3-4af495cb55b7\") " pod="openstack/horizon-6cbc78cbb4-6wlp7" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.155782 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-combined-ca-bundle\") pod \"horizon-6cbc78cbb4-6wlp7\" (UID: \"80e6122e-74aa-4ee6-a7a3-4af495cb55b7\") " pod="openstack/horizon-6cbc78cbb4-6wlp7" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.155799 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-horizon-tls-certs\") pod \"horizon-6cbc78cbb4-6wlp7\" (UID: \"80e6122e-74aa-4ee6-a7a3-4af495cb55b7\") " pod="openstack/horizon-6cbc78cbb4-6wlp7" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.214736 5014 generic.go:334] "Generic (PLEG): container finished" podID="ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3" 
containerID="30385ce18f5065cf1204f32a63834b563f809d5116e2abdb1b2c7059356af00b" exitCode=0 Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.214769 5014 generic.go:334] "Generic (PLEG): container finished" podID="ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3" containerID="3cd7d471e585c59b7642d41b1b51aaf5dc3ea9903eb5aed1127b7cfbd18c5f7f" exitCode=143 Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.214891 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3","Type":"ContainerDied","Data":"30385ce18f5065cf1204f32a63834b563f809d5116e2abdb1b2c7059356af00b"} Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.214921 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3","Type":"ContainerDied","Data":"3cd7d471e585c59b7642d41b1b51aaf5dc3ea9903eb5aed1127b7cfbd18c5f7f"} Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.217301 5014 generic.go:334] "Generic (PLEG): container finished" podID="1e12dde4-3f6a-4ddc-b8bc-385ccc197453" containerID="f5a8424c7e572f058f25fc8dbe43473333257942cc0b83a235f7502a1d6af6b5" exitCode=0 Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.217325 5014 generic.go:334] "Generic (PLEG): container finished" podID="1e12dde4-3f6a-4ddc-b8bc-385ccc197453" containerID="76be9aea2ff31ef982d06969a0d7098ae12d18a9057baaa82d3b046c794b6ee7" exitCode=143 Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.217358 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1e12dde4-3f6a-4ddc-b8bc-385ccc197453","Type":"ContainerDied","Data":"f5a8424c7e572f058f25fc8dbe43473333257942cc0b83a235f7502a1d6af6b5"} Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.217443 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"1e12dde4-3f6a-4ddc-b8bc-385ccc197453","Type":"ContainerDied","Data":"76be9aea2ff31ef982d06969a0d7098ae12d18a9057baaa82d3b046c794b6ee7"} Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.219901 5014 generic.go:334] "Generic (PLEG): container finished" podID="8b1dde17-8b85-45c0-bef3-a9439be5632e" containerID="b57907d1126245cbfcac823eeb4015387e57afb28aabba87c1cf311a841e1879" exitCode=0 Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.220037 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-kgdjz" event={"ID":"8b1dde17-8b85-45c0-bef3-a9439be5632e","Type":"ContainerDied","Data":"b57907d1126245cbfcac823eeb4015387e57afb28aabba87c1cf311a841e1879"} Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.257430 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-scripts\") pod \"horizon-6cbc78cbb4-6wlp7\" (UID: \"80e6122e-74aa-4ee6-a7a3-4af495cb55b7\") " pod="openstack/horizon-6cbc78cbb4-6wlp7" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.257723 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-combined-ca-bundle\") pod \"horizon-6cbc78cbb4-6wlp7\" (UID: \"80e6122e-74aa-4ee6-a7a3-4af495cb55b7\") " pod="openstack/horizon-6cbc78cbb4-6wlp7" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.257745 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-horizon-tls-certs\") pod \"horizon-6cbc78cbb4-6wlp7\" (UID: \"80e6122e-74aa-4ee6-a7a3-4af495cb55b7\") " pod="openstack/horizon-6cbc78cbb4-6wlp7" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.257784 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-config-data\") pod \"horizon-6cbc78cbb4-6wlp7\" (UID: \"80e6122e-74aa-4ee6-a7a3-4af495cb55b7\") " pod="openstack/horizon-6cbc78cbb4-6wlp7" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.257817 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ee56420-1b4d-4898-97db-d05756b9bb72-logs\") pod \"horizon-c9c88866d-6m8lj\" (UID: \"6ee56420-1b4d-4898-97db-d05756b9bb72\") " pod="openstack/horizon-c9c88866d-6m8lj" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.257838 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5cj8\" (UniqueName: \"kubernetes.io/projected/6ee56420-1b4d-4898-97db-d05756b9bb72-kube-api-access-g5cj8\") pod \"horizon-c9c88866d-6m8lj\" (UID: \"6ee56420-1b4d-4898-97db-d05756b9bb72\") " pod="openstack/horizon-c9c88866d-6m8lj" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.257875 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ee56420-1b4d-4898-97db-d05756b9bb72-horizon-tls-certs\") pod \"horizon-c9c88866d-6m8lj\" (UID: \"6ee56420-1b4d-4898-97db-d05756b9bb72\") " pod="openstack/horizon-c9c88866d-6m8lj" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.257907 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-horizon-secret-key\") pod \"horizon-6cbc78cbb4-6wlp7\" (UID: \"80e6122e-74aa-4ee6-a7a3-4af495cb55b7\") " pod="openstack/horizon-6cbc78cbb4-6wlp7" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.257948 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/6ee56420-1b4d-4898-97db-d05756b9bb72-combined-ca-bundle\") pod \"horizon-c9c88866d-6m8lj\" (UID: \"6ee56420-1b4d-4898-97db-d05756b9bb72\") " pod="openstack/horizon-c9c88866d-6m8lj" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.257968 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ee56420-1b4d-4898-97db-d05756b9bb72-config-data\") pod \"horizon-c9c88866d-6m8lj\" (UID: \"6ee56420-1b4d-4898-97db-d05756b9bb72\") " pod="openstack/horizon-c9c88866d-6m8lj" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.257993 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ee56420-1b4d-4898-97db-d05756b9bb72-scripts\") pod \"horizon-c9c88866d-6m8lj\" (UID: \"6ee56420-1b4d-4898-97db-d05756b9bb72\") " pod="openstack/horizon-c9c88866d-6m8lj" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.258010 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lq49q\" (UniqueName: \"kubernetes.io/projected/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-kube-api-access-lq49q\") pod \"horizon-6cbc78cbb4-6wlp7\" (UID: \"80e6122e-74aa-4ee6-a7a3-4af495cb55b7\") " pod="openstack/horizon-6cbc78cbb4-6wlp7" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.258031 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6ee56420-1b4d-4898-97db-d05756b9bb72-horizon-secret-key\") pod \"horizon-c9c88866d-6m8lj\" (UID: \"6ee56420-1b4d-4898-97db-d05756b9bb72\") " pod="openstack/horizon-c9c88866d-6m8lj" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.258051 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-logs\") pod \"horizon-6cbc78cbb4-6wlp7\" (UID: \"80e6122e-74aa-4ee6-a7a3-4af495cb55b7\") " pod="openstack/horizon-6cbc78cbb4-6wlp7" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.258415 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-scripts\") pod \"horizon-6cbc78cbb4-6wlp7\" (UID: \"80e6122e-74aa-4ee6-a7a3-4af495cb55b7\") " pod="openstack/horizon-6cbc78cbb4-6wlp7" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.258437 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-logs\") pod \"horizon-6cbc78cbb4-6wlp7\" (UID: \"80e6122e-74aa-4ee6-a7a3-4af495cb55b7\") " pod="openstack/horizon-6cbc78cbb4-6wlp7" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.264616 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-horizon-secret-key\") pod \"horizon-6cbc78cbb4-6wlp7\" (UID: \"80e6122e-74aa-4ee6-a7a3-4af495cb55b7\") " pod="openstack/horizon-6cbc78cbb4-6wlp7" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.264900 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-config-data\") pod \"horizon-6cbc78cbb4-6wlp7\" (UID: \"80e6122e-74aa-4ee6-a7a3-4af495cb55b7\") " pod="openstack/horizon-6cbc78cbb4-6wlp7" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.266002 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-combined-ca-bundle\") pod \"horizon-6cbc78cbb4-6wlp7\" (UID: \"80e6122e-74aa-4ee6-a7a3-4af495cb55b7\") " 
pod="openstack/horizon-6cbc78cbb4-6wlp7" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.283726 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-horizon-tls-certs\") pod \"horizon-6cbc78cbb4-6wlp7\" (UID: \"80e6122e-74aa-4ee6-a7a3-4af495cb55b7\") " pod="openstack/horizon-6cbc78cbb4-6wlp7" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.296775 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lq49q\" (UniqueName: \"kubernetes.io/projected/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-kube-api-access-lq49q\") pod \"horizon-6cbc78cbb4-6wlp7\" (UID: \"80e6122e-74aa-4ee6-a7a3-4af495cb55b7\") " pod="openstack/horizon-6cbc78cbb4-6wlp7" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.360083 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ee56420-1b4d-4898-97db-d05756b9bb72-logs\") pod \"horizon-c9c88866d-6m8lj\" (UID: \"6ee56420-1b4d-4898-97db-d05756b9bb72\") " pod="openstack/horizon-c9c88866d-6m8lj" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.360149 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5cj8\" (UniqueName: \"kubernetes.io/projected/6ee56420-1b4d-4898-97db-d05756b9bb72-kube-api-access-g5cj8\") pod \"horizon-c9c88866d-6m8lj\" (UID: \"6ee56420-1b4d-4898-97db-d05756b9bb72\") " pod="openstack/horizon-c9c88866d-6m8lj" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.360191 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ee56420-1b4d-4898-97db-d05756b9bb72-horizon-tls-certs\") pod \"horizon-c9c88866d-6m8lj\" (UID: \"6ee56420-1b4d-4898-97db-d05756b9bb72\") " pod="openstack/horizon-c9c88866d-6m8lj" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 
04:53:24.360304 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ee56420-1b4d-4898-97db-d05756b9bb72-combined-ca-bundle\") pod \"horizon-c9c88866d-6m8lj\" (UID: \"6ee56420-1b4d-4898-97db-d05756b9bb72\") " pod="openstack/horizon-c9c88866d-6m8lj" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.360327 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ee56420-1b4d-4898-97db-d05756b9bb72-config-data\") pod \"horizon-c9c88866d-6m8lj\" (UID: \"6ee56420-1b4d-4898-97db-d05756b9bb72\") " pod="openstack/horizon-c9c88866d-6m8lj" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.360388 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6ee56420-1b4d-4898-97db-d05756b9bb72-scripts\") pod \"horizon-c9c88866d-6m8lj\" (UID: \"6ee56420-1b4d-4898-97db-d05756b9bb72\") " pod="openstack/horizon-c9c88866d-6m8lj" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.360413 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6ee56420-1b4d-4898-97db-d05756b9bb72-horizon-secret-key\") pod \"horizon-c9c88866d-6m8lj\" (UID: \"6ee56420-1b4d-4898-97db-d05756b9bb72\") " pod="openstack/horizon-c9c88866d-6m8lj" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.361451 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6ee56420-1b4d-4898-97db-d05756b9bb72-logs\") pod \"horizon-c9c88866d-6m8lj\" (UID: \"6ee56420-1b4d-4898-97db-d05756b9bb72\") " pod="openstack/horizon-c9c88866d-6m8lj" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.363071 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/6ee56420-1b4d-4898-97db-d05756b9bb72-scripts\") pod \"horizon-c9c88866d-6m8lj\" (UID: \"6ee56420-1b4d-4898-97db-d05756b9bb72\") " pod="openstack/horizon-c9c88866d-6m8lj" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.363082 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6ee56420-1b4d-4898-97db-d05756b9bb72-config-data\") pod \"horizon-c9c88866d-6m8lj\" (UID: \"6ee56420-1b4d-4898-97db-d05756b9bb72\") " pod="openstack/horizon-c9c88866d-6m8lj" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.376527 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6ee56420-1b4d-4898-97db-d05756b9bb72-horizon-tls-certs\") pod \"horizon-c9c88866d-6m8lj\" (UID: \"6ee56420-1b4d-4898-97db-d05756b9bb72\") " pod="openstack/horizon-c9c88866d-6m8lj" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.377185 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6ee56420-1b4d-4898-97db-d05756b9bb72-horizon-secret-key\") pod \"horizon-c9c88866d-6m8lj\" (UID: \"6ee56420-1b4d-4898-97db-d05756b9bb72\") " pod="openstack/horizon-c9c88866d-6m8lj" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.377282 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6ee56420-1b4d-4898-97db-d05756b9bb72-combined-ca-bundle\") pod \"horizon-c9c88866d-6m8lj\" (UID: \"6ee56420-1b4d-4898-97db-d05756b9bb72\") " pod="openstack/horizon-c9c88866d-6m8lj" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.380121 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5cj8\" (UniqueName: \"kubernetes.io/projected/6ee56420-1b4d-4898-97db-d05756b9bb72-kube-api-access-g5cj8\") pod \"horizon-c9c88866d-6m8lj\" (UID: 
\"6ee56420-1b4d-4898-97db-d05756b9bb72\") " pod="openstack/horizon-c9c88866d-6m8lj" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.518123 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6cbc78cbb4-6wlp7" Feb 28 04:53:24 crc kubenswrapper[5014]: I0228 04:53:24.527005 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-c9c88866d-6m8lj" Feb 28 04:53:28 crc kubenswrapper[5014]: I0228 04:53:28.319608 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-kgdjz" podUID="8b1dde17-8b85-45c0-bef3-a9439be5632e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.115:5353: connect: connection refused" Feb 28 04:53:33 crc kubenswrapper[5014]: E0228 04:53:33.120182 5014 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 28 04:53:33 crc kubenswrapper[5014]: E0228 04:53:33.121262 5014 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5fbh654h694hfch574h9fh78h5cdh98h66ch564hbbh66ch5f8hfdh568h5ffh5f8h64chd6h97h7dh676h5fbh65bhc6h5fh6ch5dbhc5h9dh5bfq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mk9r7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6d9d97cc85-t5jdm_openstack(666d76d2-bd8a-4533-86ef-d87c77ed4912): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 04:53:33 crc kubenswrapper[5014]: E0228 
04:53:33.124893 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-6d9d97cc85-t5jdm" podUID="666d76d2-bd8a-4533-86ef-d87c77ed4912" Feb 28 04:53:33 crc kubenswrapper[5014]: I0228 04:53:33.319980 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-kgdjz" podUID="8b1dde17-8b85-45c0-bef3-a9439be5632e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.115:5353: connect: connection refused" Feb 28 04:53:35 crc kubenswrapper[5014]: E0228 04:53:35.072957 5014 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-placement-api:current-podified" Feb 28 04:53:35 crc kubenswrapper[5014]: E0228 04:53:35.073400 5014 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8tll8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-5tgzd_openstack(c5e88418-60bd-44ee-8272-245ee92460c6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 04:53:35 crc kubenswrapper[5014]: E0228 04:53:35.074768 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-5tgzd" podUID="c5e88418-60bd-44ee-8272-245ee92460c6" Feb 28 04:53:35 crc kubenswrapper[5014]: E0228 04:53:35.098766 5014 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 28 04:53:35 crc kubenswrapper[5014]: E0228 04:53:35.098905 5014 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n69h686hd5h549h566h68fhd4hfch559h577h698h8ch5f8h87h5bfh68bh5fdh97h5fdh5fdhd7h9dh554h55ch54ch58h544hd9h7h55dh5f9h67dq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fczr4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5c865cc775-z8ptx_openstack(9ef476e7-cd3d-4eb4-abdd-bd677cb3da87): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 04:53:35 crc kubenswrapper[5014]: E0228 
04:53:35.102162 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5c865cc775-z8ptx" podUID="9ef476e7-cd3d-4eb4-abdd-bd677cb3da87" Feb 28 04:53:35 crc kubenswrapper[5014]: E0228 04:53:35.135751 5014 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Feb 28 04:53:35 crc kubenswrapper[5014]: E0228 04:53:35.135922 5014 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n65h559h59dh594h645h5f9hf8hbbhcbh5f9h5cdh649h698h5c8h5dfhc6h5ddh695h5b8h5cdh57chbbhc9h666h66bh5f6h5dfh5fdh67dh87h66ch67q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n84l9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-db6f49d9f-4k7d7_openstack(a22e1d3e-80dc-44d0-b199-588397ea177e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 04:53:35 crc kubenswrapper[5014]: E0228 
04:53:35.140029 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-db6f49d9f-4k7d7" podUID="a22e1d3e-80dc-44d0-b199-588397ea177e" Feb 28 04:53:35 crc kubenswrapper[5014]: E0228 04:53:35.331549 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-placement-api:current-podified\\\"\"" pod="openstack/placement-db-sync-5tgzd" podUID="c5e88418-60bd-44ee-8272-245ee92460c6" Feb 28 04:53:38 crc kubenswrapper[5014]: I0228 04:53:38.319585 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-kgdjz" podUID="8b1dde17-8b85-45c0-bef3-a9439be5632e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.115:5353: connect: connection refused" Feb 28 04:53:38 crc kubenswrapper[5014]: I0228 04:53:38.320323 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-kgdjz" Feb 28 04:53:42 crc kubenswrapper[5014]: I0228 04:53:42.773760 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-mzbcc" Feb 28 04:53:42 crc kubenswrapper[5014]: I0228 04:53:42.838409 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c00825d9-a4c0-40d9-b77c-e1661747f42d-config-data\") pod \"c00825d9-a4c0-40d9-b77c-e1661747f42d\" (UID: \"c00825d9-a4c0-40d9-b77c-e1661747f42d\") " Feb 28 04:53:42 crc kubenswrapper[5014]: I0228 04:53:42.838449 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c00825d9-a4c0-40d9-b77c-e1661747f42d-combined-ca-bundle\") pod \"c00825d9-a4c0-40d9-b77c-e1661747f42d\" (UID: \"c00825d9-a4c0-40d9-b77c-e1661747f42d\") " Feb 28 04:53:42 crc kubenswrapper[5014]: I0228 04:53:42.838475 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c00825d9-a4c0-40d9-b77c-e1661747f42d-credential-keys\") pod \"c00825d9-a4c0-40d9-b77c-e1661747f42d\" (UID: \"c00825d9-a4c0-40d9-b77c-e1661747f42d\") " Feb 28 04:53:42 crc kubenswrapper[5014]: I0228 04:53:42.838561 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c00825d9-a4c0-40d9-b77c-e1661747f42d-scripts\") pod \"c00825d9-a4c0-40d9-b77c-e1661747f42d\" (UID: \"c00825d9-a4c0-40d9-b77c-e1661747f42d\") " Feb 28 04:53:42 crc kubenswrapper[5014]: I0228 04:53:42.838666 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c00825d9-a4c0-40d9-b77c-e1661747f42d-fernet-keys\") pod \"c00825d9-a4c0-40d9-b77c-e1661747f42d\" (UID: \"c00825d9-a4c0-40d9-b77c-e1661747f42d\") " Feb 28 04:53:42 crc kubenswrapper[5014]: I0228 04:53:42.838737 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hgdh\" (UniqueName: 
\"kubernetes.io/projected/c00825d9-a4c0-40d9-b77c-e1661747f42d-kube-api-access-7hgdh\") pod \"c00825d9-a4c0-40d9-b77c-e1661747f42d\" (UID: \"c00825d9-a4c0-40d9-b77c-e1661747f42d\") " Feb 28 04:53:42 crc kubenswrapper[5014]: I0228 04:53:42.846949 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c00825d9-a4c0-40d9-b77c-e1661747f42d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "c00825d9-a4c0-40d9-b77c-e1661747f42d" (UID: "c00825d9-a4c0-40d9-b77c-e1661747f42d"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:53:42 crc kubenswrapper[5014]: I0228 04:53:42.848624 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c00825d9-a4c0-40d9-b77c-e1661747f42d-kube-api-access-7hgdh" (OuterVolumeSpecName: "kube-api-access-7hgdh") pod "c00825d9-a4c0-40d9-b77c-e1661747f42d" (UID: "c00825d9-a4c0-40d9-b77c-e1661747f42d"). InnerVolumeSpecName "kube-api-access-7hgdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:53:42 crc kubenswrapper[5014]: I0228 04:53:42.849861 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c00825d9-a4c0-40d9-b77c-e1661747f42d-scripts" (OuterVolumeSpecName: "scripts") pod "c00825d9-a4c0-40d9-b77c-e1661747f42d" (UID: "c00825d9-a4c0-40d9-b77c-e1661747f42d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:53:42 crc kubenswrapper[5014]: I0228 04:53:42.852951 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c00825d9-a4c0-40d9-b77c-e1661747f42d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c00825d9-a4c0-40d9-b77c-e1661747f42d" (UID: "c00825d9-a4c0-40d9-b77c-e1661747f42d"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:53:42 crc kubenswrapper[5014]: I0228 04:53:42.880083 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c00825d9-a4c0-40d9-b77c-e1661747f42d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c00825d9-a4c0-40d9-b77c-e1661747f42d" (UID: "c00825d9-a4c0-40d9-b77c-e1661747f42d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:53:42 crc kubenswrapper[5014]: I0228 04:53:42.885369 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c00825d9-a4c0-40d9-b77c-e1661747f42d-config-data" (OuterVolumeSpecName: "config-data") pod "c00825d9-a4c0-40d9-b77c-e1661747f42d" (UID: "c00825d9-a4c0-40d9-b77c-e1661747f42d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:53:42 crc kubenswrapper[5014]: I0228 04:53:42.942109 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c00825d9-a4c0-40d9-b77c-e1661747f42d-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:42 crc kubenswrapper[5014]: I0228 04:53:42.942153 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c00825d9-a4c0-40d9-b77c-e1661747f42d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:42 crc kubenswrapper[5014]: I0228 04:53:42.942168 5014 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/c00825d9-a4c0-40d9-b77c-e1661747f42d-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:42 crc kubenswrapper[5014]: I0228 04:53:42.942179 5014 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c00825d9-a4c0-40d9-b77c-e1661747f42d-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:42 crc 
kubenswrapper[5014]: I0228 04:53:42.942189 5014 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c00825d9-a4c0-40d9-b77c-e1661747f42d-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:42 crc kubenswrapper[5014]: I0228 04:53:42.942202 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hgdh\" (UniqueName: \"kubernetes.io/projected/c00825d9-a4c0-40d9-b77c-e1661747f42d-kube-api-access-7hgdh\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:43 crc kubenswrapper[5014]: E0228 04:53:43.308830 5014 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 28 04:53:43 crc kubenswrapper[5014]: E0228 04:53:43.309027 5014 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v5c7j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-wxq9x_openstack(57f91015-35f5-486c-a88c-0a90f76724e5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 04:53:43 crc kubenswrapper[5014]: E0228 04:53:43.310287 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-wxq9x" 
podUID="57f91015-35f5-486c-a88c-0a90f76724e5" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.320291 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-kgdjz" podUID="8b1dde17-8b85-45c0-bef3-a9439be5632e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.115:5353: connect: connection refused" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.374268 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d9d97cc85-t5jdm" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.416671 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c865cc775-z8ptx" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.419345 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-db6f49d9f-4k7d7" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.450222 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ef476e7-cd3d-4eb4-abdd-bd677cb3da87-config-data\") pod \"9ef476e7-cd3d-4eb4-abdd-bd677cb3da87\" (UID: \"9ef476e7-cd3d-4eb4-abdd-bd677cb3da87\") " Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.450297 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/666d76d2-bd8a-4533-86ef-d87c77ed4912-logs\") pod \"666d76d2-bd8a-4533-86ef-d87c77ed4912\" (UID: \"666d76d2-bd8a-4533-86ef-d87c77ed4912\") " Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.450321 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n84l9\" (UniqueName: \"kubernetes.io/projected/a22e1d3e-80dc-44d0-b199-588397ea177e-kube-api-access-n84l9\") pod \"a22e1d3e-80dc-44d0-b199-588397ea177e\" (UID: \"a22e1d3e-80dc-44d0-b199-588397ea177e\") " Feb 28 04:53:43 crc 
kubenswrapper[5014]: I0228 04:53:43.450355 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9ef476e7-cd3d-4eb4-abdd-bd677cb3da87-horizon-secret-key\") pod \"9ef476e7-cd3d-4eb4-abdd-bd677cb3da87\" (UID: \"9ef476e7-cd3d-4eb4-abdd-bd677cb3da87\") " Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.450388 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a22e1d3e-80dc-44d0-b199-588397ea177e-logs\") pod \"a22e1d3e-80dc-44d0-b199-588397ea177e\" (UID: \"a22e1d3e-80dc-44d0-b199-588397ea177e\") " Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.450440 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/666d76d2-bd8a-4533-86ef-d87c77ed4912-config-data\") pod \"666d76d2-bd8a-4533-86ef-d87c77ed4912\" (UID: \"666d76d2-bd8a-4533-86ef-d87c77ed4912\") " Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.450473 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ef476e7-cd3d-4eb4-abdd-bd677cb3da87-scripts\") pod \"9ef476e7-cd3d-4eb4-abdd-bd677cb3da87\" (UID: \"9ef476e7-cd3d-4eb4-abdd-bd677cb3da87\") " Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.450513 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ef476e7-cd3d-4eb4-abdd-bd677cb3da87-logs\") pod \"9ef476e7-cd3d-4eb4-abdd-bd677cb3da87\" (UID: \"9ef476e7-cd3d-4eb4-abdd-bd677cb3da87\") " Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.450535 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fczr4\" (UniqueName: \"kubernetes.io/projected/9ef476e7-cd3d-4eb4-abdd-bd677cb3da87-kube-api-access-fczr4\") pod 
\"9ef476e7-cd3d-4eb4-abdd-bd677cb3da87\" (UID: \"9ef476e7-cd3d-4eb4-abdd-bd677cb3da87\") " Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.450553 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a22e1d3e-80dc-44d0-b199-588397ea177e-scripts\") pod \"a22e1d3e-80dc-44d0-b199-588397ea177e\" (UID: \"a22e1d3e-80dc-44d0-b199-588397ea177e\") " Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.450571 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/666d76d2-bd8a-4533-86ef-d87c77ed4912-horizon-secret-key\") pod \"666d76d2-bd8a-4533-86ef-d87c77ed4912\" (UID: \"666d76d2-bd8a-4533-86ef-d87c77ed4912\") " Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.450589 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mk9r7\" (UniqueName: \"kubernetes.io/projected/666d76d2-bd8a-4533-86ef-d87c77ed4912-kube-api-access-mk9r7\") pod \"666d76d2-bd8a-4533-86ef-d87c77ed4912\" (UID: \"666d76d2-bd8a-4533-86ef-d87c77ed4912\") " Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.450612 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/666d76d2-bd8a-4533-86ef-d87c77ed4912-scripts\") pod \"666d76d2-bd8a-4533-86ef-d87c77ed4912\" (UID: \"666d76d2-bd8a-4533-86ef-d87c77ed4912\") " Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.450645 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a22e1d3e-80dc-44d0-b199-588397ea177e-config-data\") pod \"a22e1d3e-80dc-44d0-b199-588397ea177e\" (UID: \"a22e1d3e-80dc-44d0-b199-588397ea177e\") " Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.450663 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a22e1d3e-80dc-44d0-b199-588397ea177e-horizon-secret-key\") pod \"a22e1d3e-80dc-44d0-b199-588397ea177e\" (UID: \"a22e1d3e-80dc-44d0-b199-588397ea177e\") " Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.450949 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a22e1d3e-80dc-44d0-b199-588397ea177e-logs" (OuterVolumeSpecName: "logs") pod "a22e1d3e-80dc-44d0-b199-588397ea177e" (UID: "a22e1d3e-80dc-44d0-b199-588397ea177e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.451306 5014 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a22e1d3e-80dc-44d0-b199-588397ea177e-logs\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.451875 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/666d76d2-bd8a-4533-86ef-d87c77ed4912-logs" (OuterVolumeSpecName: "logs") pod "666d76d2-bd8a-4533-86ef-d87c77ed4912" (UID: "666d76d2-bd8a-4533-86ef-d87c77ed4912"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.451739 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ef476e7-cd3d-4eb4-abdd-bd677cb3da87-config-data" (OuterVolumeSpecName: "config-data") pod "9ef476e7-cd3d-4eb4-abdd-bd677cb3da87" (UID: "9ef476e7-cd3d-4eb4-abdd-bd677cb3da87"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.452228 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ef476e7-cd3d-4eb4-abdd-bd677cb3da87-scripts" (OuterVolumeSpecName: "scripts") pod "9ef476e7-cd3d-4eb4-abdd-bd677cb3da87" (UID: "9ef476e7-cd3d-4eb4-abdd-bd677cb3da87"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.452340 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/666d76d2-bd8a-4533-86ef-d87c77ed4912-scripts" (OuterVolumeSpecName: "scripts") pod "666d76d2-bd8a-4533-86ef-d87c77ed4912" (UID: "666d76d2-bd8a-4533-86ef-d87c77ed4912"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.452606 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a22e1d3e-80dc-44d0-b199-588397ea177e-scripts" (OuterVolumeSpecName: "scripts") pod "a22e1d3e-80dc-44d0-b199-588397ea177e" (UID: "a22e1d3e-80dc-44d0-b199-588397ea177e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.453264 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ef476e7-cd3d-4eb4-abdd-bd677cb3da87-logs" (OuterVolumeSpecName: "logs") pod "9ef476e7-cd3d-4eb4-abdd-bd677cb3da87" (UID: "9ef476e7-cd3d-4eb4-abdd-bd677cb3da87"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.453383 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a22e1d3e-80dc-44d0-b199-588397ea177e-config-data" (OuterVolumeSpecName: "config-data") pod "a22e1d3e-80dc-44d0-b199-588397ea177e" (UID: "a22e1d3e-80dc-44d0-b199-588397ea177e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.453598 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-mzbcc" event={"ID":"c00825d9-a4c0-40d9-b77c-e1661747f42d","Type":"ContainerDied","Data":"69eea2e5d56c68426b26e5bae43b07ccb9ff367b85ce49913d222600b9c1e2b1"} Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.453636 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69eea2e5d56c68426b26e5bae43b07ccb9ff367b85ce49913d222600b9c1e2b1" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.453662 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/666d76d2-bd8a-4533-86ef-d87c77ed4912-config-data" (OuterVolumeSpecName: "config-data") pod "666d76d2-bd8a-4533-86ef-d87c77ed4912" (UID: "666d76d2-bd8a-4533-86ef-d87c77ed4912"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.453853 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-mzbcc" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.454638 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ef476e7-cd3d-4eb4-abdd-bd677cb3da87-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9ef476e7-cd3d-4eb4-abdd-bd677cb3da87" (UID: "9ef476e7-cd3d-4eb4-abdd-bd677cb3da87"). 
InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.455268 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a22e1d3e-80dc-44d0-b199-588397ea177e-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "a22e1d3e-80dc-44d0-b199-588397ea177e" (UID: "a22e1d3e-80dc-44d0-b199-588397ea177e"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.456053 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6d9d97cc85-t5jdm" event={"ID":"666d76d2-bd8a-4533-86ef-d87c77ed4912","Type":"ContainerDied","Data":"9defa71ba224f01bf8368aaaeff8a4ae49bca8894f72098a9a1233122cd7e4c7"} Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.456070 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6d9d97cc85-t5jdm" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.456647 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/666d76d2-bd8a-4533-86ef-d87c77ed4912-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "666d76d2-bd8a-4533-86ef-d87c77ed4912" (UID: "666d76d2-bd8a-4533-86ef-d87c77ed4912"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.461449 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/666d76d2-bd8a-4533-86ef-d87c77ed4912-kube-api-access-mk9r7" (OuterVolumeSpecName: "kube-api-access-mk9r7") pod "666d76d2-bd8a-4533-86ef-d87c77ed4912" (UID: "666d76d2-bd8a-4533-86ef-d87c77ed4912"). InnerVolumeSpecName "kube-api-access-mk9r7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.464178 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ef476e7-cd3d-4eb4-abdd-bd677cb3da87-kube-api-access-fczr4" (OuterVolumeSpecName: "kube-api-access-fczr4") pod "9ef476e7-cd3d-4eb4-abdd-bd677cb3da87" (UID: "9ef476e7-cd3d-4eb4-abdd-bd677cb3da87"). InnerVolumeSpecName "kube-api-access-fczr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.468686 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-db6f49d9f-4k7d7" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.468707 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-db6f49d9f-4k7d7" event={"ID":"a22e1d3e-80dc-44d0-b199-588397ea177e","Type":"ContainerDied","Data":"68851ea4829ae2eef053801406a032b909f3f876a46073a6436122b31fb88530"} Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.470219 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5c865cc775-z8ptx" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.470268 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c865cc775-z8ptx" event={"ID":"9ef476e7-cd3d-4eb4-abdd-bd677cb3da87","Type":"ContainerDied","Data":"e8d27800f54b70c0e33f6f28be8d6a230530755d3ebedc9244082508b70377f5"} Feb 28 04:53:43 crc kubenswrapper[5014]: E0228 04:53:43.472912 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-wxq9x" podUID="57f91015-35f5-486c-a88c-0a90f76724e5" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.473966 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a22e1d3e-80dc-44d0-b199-588397ea177e-kube-api-access-n84l9" (OuterVolumeSpecName: "kube-api-access-n84l9") pod "a22e1d3e-80dc-44d0-b199-588397ea177e" (UID: "a22e1d3e-80dc-44d0-b199-588397ea177e"). InnerVolumeSpecName "kube-api-access-n84l9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.556389 5014 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9ef476e7-cd3d-4eb4-abdd-bd677cb3da87-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.556426 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/666d76d2-bd8a-4533-86ef-d87c77ed4912-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.556440 5014 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9ef476e7-cd3d-4eb4-abdd-bd677cb3da87-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.556451 5014 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ef476e7-cd3d-4eb4-abdd-bd677cb3da87-logs\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.556463 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fczr4\" (UniqueName: \"kubernetes.io/projected/9ef476e7-cd3d-4eb4-abdd-bd677cb3da87-kube-api-access-fczr4\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.556473 5014 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a22e1d3e-80dc-44d0-b199-588397ea177e-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.556483 5014 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/666d76d2-bd8a-4533-86ef-d87c77ed4912-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.556494 5014 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-mk9r7\" (UniqueName: \"kubernetes.io/projected/666d76d2-bd8a-4533-86ef-d87c77ed4912-kube-api-access-mk9r7\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.556503 5014 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/666d76d2-bd8a-4533-86ef-d87c77ed4912-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.556512 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a22e1d3e-80dc-44d0-b199-588397ea177e-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.556522 5014 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a22e1d3e-80dc-44d0-b199-588397ea177e-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.556533 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9ef476e7-cd3d-4eb4-abdd-bd677cb3da87-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.556542 5014 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/666d76d2-bd8a-4533-86ef-d87c77ed4912-logs\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.556555 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n84l9\" (UniqueName: \"kubernetes.io/projected/a22e1d3e-80dc-44d0-b199-588397ea177e-kube-api-access-n84l9\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.575090 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5c865cc775-z8ptx"] Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.584607 5014 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5c865cc775-z8ptx"] Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.826981 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6d9d97cc85-t5jdm"] Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.842052 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6d9d97cc85-t5jdm"] Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.901847 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-db6f49d9f-4k7d7"] Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.913755 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-db6f49d9f-4k7d7"] Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.922130 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-mzbcc"] Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.931526 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-mzbcc"] Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.967582 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-cmvws"] Feb 28 04:53:43 crc kubenswrapper[5014]: E0228 04:53:43.968188 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c00825d9-a4c0-40d9-b77c-e1661747f42d" containerName="keystone-bootstrap" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.968209 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="c00825d9-a4c0-40d9-b77c-e1661747f42d" containerName="keystone-bootstrap" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.968441 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="c00825d9-a4c0-40d9-b77c-e1661747f42d" containerName="keystone-bootstrap" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.969847 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-cmvws" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.971851 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.971984 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zmpcc" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.972238 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.973104 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.973193 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 28 04:53:43 crc kubenswrapper[5014]: I0228 04:53:43.980150 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-cmvws"] Feb 28 04:53:44 crc kubenswrapper[5014]: I0228 04:53:44.068609 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbvtt\" (UniqueName: \"kubernetes.io/projected/8c0a6572-64f2-488b-9533-c04957535d16-kube-api-access-mbvtt\") pod \"keystone-bootstrap-cmvws\" (UID: \"8c0a6572-64f2-488b-9533-c04957535d16\") " pod="openstack/keystone-bootstrap-cmvws" Feb 28 04:53:44 crc kubenswrapper[5014]: I0228 04:53:44.068659 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c0a6572-64f2-488b-9533-c04957535d16-config-data\") pod \"keystone-bootstrap-cmvws\" (UID: \"8c0a6572-64f2-488b-9533-c04957535d16\") " pod="openstack/keystone-bootstrap-cmvws" Feb 28 04:53:44 crc kubenswrapper[5014]: I0228 04:53:44.068684 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8c0a6572-64f2-488b-9533-c04957535d16-credential-keys\") pod \"keystone-bootstrap-cmvws\" (UID: \"8c0a6572-64f2-488b-9533-c04957535d16\") " pod="openstack/keystone-bootstrap-cmvws" Feb 28 04:53:44 crc kubenswrapper[5014]: I0228 04:53:44.068871 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8c0a6572-64f2-488b-9533-c04957535d16-fernet-keys\") pod \"keystone-bootstrap-cmvws\" (UID: \"8c0a6572-64f2-488b-9533-c04957535d16\") " pod="openstack/keystone-bootstrap-cmvws" Feb 28 04:53:44 crc kubenswrapper[5014]: I0228 04:53:44.068905 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c0a6572-64f2-488b-9533-c04957535d16-combined-ca-bundle\") pod \"keystone-bootstrap-cmvws\" (UID: \"8c0a6572-64f2-488b-9533-c04957535d16\") " pod="openstack/keystone-bootstrap-cmvws" Feb 28 04:53:44 crc kubenswrapper[5014]: I0228 04:53:44.069148 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c0a6572-64f2-488b-9533-c04957535d16-scripts\") pod \"keystone-bootstrap-cmvws\" (UID: \"8c0a6572-64f2-488b-9533-c04957535d16\") " pod="openstack/keystone-bootstrap-cmvws" Feb 28 04:53:44 crc kubenswrapper[5014]: I0228 04:53:44.155045 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 28 04:53:44 crc kubenswrapper[5014]: I0228 04:53:44.155096 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 28 04:53:44 crc kubenswrapper[5014]: I0228 04:53:44.170728 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8c0a6572-64f2-488b-9533-c04957535d16-combined-ca-bundle\") pod \"keystone-bootstrap-cmvws\" (UID: \"8c0a6572-64f2-488b-9533-c04957535d16\") " pod="openstack/keystone-bootstrap-cmvws" Feb 28 04:53:44 crc kubenswrapper[5014]: I0228 04:53:44.170882 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c0a6572-64f2-488b-9533-c04957535d16-scripts\") pod \"keystone-bootstrap-cmvws\" (UID: \"8c0a6572-64f2-488b-9533-c04957535d16\") " pod="openstack/keystone-bootstrap-cmvws" Feb 28 04:53:44 crc kubenswrapper[5014]: I0228 04:53:44.170928 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbvtt\" (UniqueName: \"kubernetes.io/projected/8c0a6572-64f2-488b-9533-c04957535d16-kube-api-access-mbvtt\") pod \"keystone-bootstrap-cmvws\" (UID: \"8c0a6572-64f2-488b-9533-c04957535d16\") " pod="openstack/keystone-bootstrap-cmvws" Feb 28 04:53:44 crc kubenswrapper[5014]: I0228 04:53:44.170963 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c0a6572-64f2-488b-9533-c04957535d16-config-data\") pod \"keystone-bootstrap-cmvws\" (UID: \"8c0a6572-64f2-488b-9533-c04957535d16\") " pod="openstack/keystone-bootstrap-cmvws" Feb 28 04:53:44 crc kubenswrapper[5014]: I0228 04:53:44.170988 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8c0a6572-64f2-488b-9533-c04957535d16-credential-keys\") pod \"keystone-bootstrap-cmvws\" (UID: \"8c0a6572-64f2-488b-9533-c04957535d16\") " pod="openstack/keystone-bootstrap-cmvws" Feb 28 04:53:44 crc kubenswrapper[5014]: I0228 04:53:44.171057 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8c0a6572-64f2-488b-9533-c04957535d16-fernet-keys\") pod \"keystone-bootstrap-cmvws\" 
(UID: \"8c0a6572-64f2-488b-9533-c04957535d16\") " pod="openstack/keystone-bootstrap-cmvws" Feb 28 04:53:44 crc kubenswrapper[5014]: I0228 04:53:44.177593 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c0a6572-64f2-488b-9533-c04957535d16-scripts\") pod \"keystone-bootstrap-cmvws\" (UID: \"8c0a6572-64f2-488b-9533-c04957535d16\") " pod="openstack/keystone-bootstrap-cmvws" Feb 28 04:53:44 crc kubenswrapper[5014]: I0228 04:53:44.177927 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8c0a6572-64f2-488b-9533-c04957535d16-fernet-keys\") pod \"keystone-bootstrap-cmvws\" (UID: \"8c0a6572-64f2-488b-9533-c04957535d16\") " pod="openstack/keystone-bootstrap-cmvws" Feb 28 04:53:44 crc kubenswrapper[5014]: I0228 04:53:44.180110 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c0a6572-64f2-488b-9533-c04957535d16-combined-ca-bundle\") pod \"keystone-bootstrap-cmvws\" (UID: \"8c0a6572-64f2-488b-9533-c04957535d16\") " pod="openstack/keystone-bootstrap-cmvws" Feb 28 04:53:44 crc kubenswrapper[5014]: I0228 04:53:44.181007 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8c0a6572-64f2-488b-9533-c04957535d16-credential-keys\") pod \"keystone-bootstrap-cmvws\" (UID: \"8c0a6572-64f2-488b-9533-c04957535d16\") " pod="openstack/keystone-bootstrap-cmvws" Feb 28 04:53:44 crc kubenswrapper[5014]: I0228 04:53:44.182505 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="666d76d2-bd8a-4533-86ef-d87c77ed4912" path="/var/lib/kubelet/pods/666d76d2-bd8a-4533-86ef-d87c77ed4912/volumes" Feb 28 04:53:44 crc kubenswrapper[5014]: I0228 04:53:44.183163 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ef476e7-cd3d-4eb4-abdd-bd677cb3da87" 
path="/var/lib/kubelet/pods/9ef476e7-cd3d-4eb4-abdd-bd677cb3da87/volumes" Feb 28 04:53:44 crc kubenswrapper[5014]: I0228 04:53:44.183736 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a22e1d3e-80dc-44d0-b199-588397ea177e" path="/var/lib/kubelet/pods/a22e1d3e-80dc-44d0-b199-588397ea177e/volumes" Feb 28 04:53:44 crc kubenswrapper[5014]: I0228 04:53:44.184250 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c00825d9-a4c0-40d9-b77c-e1661747f42d" path="/var/lib/kubelet/pods/c00825d9-a4c0-40d9-b77c-e1661747f42d/volumes" Feb 28 04:53:44 crc kubenswrapper[5014]: I0228 04:53:44.186747 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c0a6572-64f2-488b-9533-c04957535d16-config-data\") pod \"keystone-bootstrap-cmvws\" (UID: \"8c0a6572-64f2-488b-9533-c04957535d16\") " pod="openstack/keystone-bootstrap-cmvws" Feb 28 04:53:44 crc kubenswrapper[5014]: I0228 04:53:44.192250 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbvtt\" (UniqueName: \"kubernetes.io/projected/8c0a6572-64f2-488b-9533-c04957535d16-kube-api-access-mbvtt\") pod \"keystone-bootstrap-cmvws\" (UID: \"8c0a6572-64f2-488b-9533-c04957535d16\") " pod="openstack/keystone-bootstrap-cmvws" Feb 28 04:53:44 crc kubenswrapper[5014]: I0228 04:53:44.294769 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-cmvws" Feb 28 04:53:44 crc kubenswrapper[5014]: I0228 04:53:44.519235 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 28 04:53:44 crc kubenswrapper[5014]: I0228 04:53:44.519273 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 28 04:53:44 crc kubenswrapper[5014]: E0228 04:53:44.855126 5014 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 28 04:53:44 crc kubenswrapper[5014]: E0228 04:53:44.855318 5014 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,M
ountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4cdd5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-c9b9j_openstack(1688b2e2-1aaf-49e0-8414-0f12bb079aba): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 04:53:44 crc kubenswrapper[5014]: E0228 04:53:44.856460 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-c9b9j" podUID="1688b2e2-1aaf-49e0-8414-0f12bb079aba" Feb 28 04:53:44 crc kubenswrapper[5014]: I0228 04:53:44.998776 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-kgdjz" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.004664 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.017444 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.089772 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3-logs\") pod \"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3\" (UID: \"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3\") " Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.089867 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e12dde4-3f6a-4ddc-b8bc-385ccc197453-combined-ca-bundle\") pod \"1e12dde4-3f6a-4ddc-b8bc-385ccc197453\" (UID: \"1e12dde4-3f6a-4ddc-b8bc-385ccc197453\") " Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.089894 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3-config-data\") pod \"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3\" (UID: \"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3\") " Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.089928 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3\" (UID: \"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3\") " Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.089956 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qf4p\" (UniqueName: 
\"kubernetes.io/projected/1e12dde4-3f6a-4ddc-b8bc-385ccc197453-kube-api-access-5qf4p\") pod \"1e12dde4-3f6a-4ddc-b8bc-385ccc197453\" (UID: \"1e12dde4-3f6a-4ddc-b8bc-385ccc197453\") " Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.090003 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e12dde4-3f6a-4ddc-b8bc-385ccc197453-httpd-run\") pod \"1e12dde4-3f6a-4ddc-b8bc-385ccc197453\" (UID: \"1e12dde4-3f6a-4ddc-b8bc-385ccc197453\") " Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.090034 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b1dde17-8b85-45c0-bef3-a9439be5632e-ovsdbserver-nb\") pod \"8b1dde17-8b85-45c0-bef3-a9439be5632e\" (UID: \"8b1dde17-8b85-45c0-bef3-a9439be5632e\") " Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.090052 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3-combined-ca-bundle\") pod \"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3\" (UID: \"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3\") " Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.090077 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3-httpd-run\") pod \"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3\" (UID: \"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3\") " Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.090116 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e12dde4-3f6a-4ddc-b8bc-385ccc197453-logs\") pod \"1e12dde4-3f6a-4ddc-b8bc-385ccc197453\" (UID: \"1e12dde4-3f6a-4ddc-b8bc-385ccc197453\") " Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.090136 5014 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b1dde17-8b85-45c0-bef3-a9439be5632e-ovsdbserver-sb\") pod \"8b1dde17-8b85-45c0-bef3-a9439be5632e\" (UID: \"8b1dde17-8b85-45c0-bef3-a9439be5632e\") " Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.090165 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e12dde4-3f6a-4ddc-b8bc-385ccc197453-scripts\") pod \"1e12dde4-3f6a-4ddc-b8bc-385ccc197453\" (UID: \"1e12dde4-3f6a-4ddc-b8bc-385ccc197453\") " Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.090202 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b1dde17-8b85-45c0-bef3-a9439be5632e-config\") pod \"8b1dde17-8b85-45c0-bef3-a9439be5632e\" (UID: \"8b1dde17-8b85-45c0-bef3-a9439be5632e\") " Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.090235 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e12dde4-3f6a-4ddc-b8bc-385ccc197453-config-data\") pod \"1e12dde4-3f6a-4ddc-b8bc-385ccc197453\" (UID: \"1e12dde4-3f6a-4ddc-b8bc-385ccc197453\") " Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.090275 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b1dde17-8b85-45c0-bef3-a9439be5632e-dns-svc\") pod \"8b1dde17-8b85-45c0-bef3-a9439be5632e\" (UID: \"8b1dde17-8b85-45c0-bef3-a9439be5632e\") " Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.090286 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3-logs" (OuterVolumeSpecName: "logs") pod "ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3" (UID: "ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.090323 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"1e12dde4-3f6a-4ddc-b8bc-385ccc197453\" (UID: \"1e12dde4-3f6a-4ddc-b8bc-385ccc197453\") " Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.090359 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwzbh\" (UniqueName: \"kubernetes.io/projected/ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3-kube-api-access-bwzbh\") pod \"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3\" (UID: \"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3\") " Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.090430 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3-scripts\") pod \"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3\" (UID: \"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3\") " Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.090462 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdphb\" (UniqueName: \"kubernetes.io/projected/8b1dde17-8b85-45c0-bef3-a9439be5632e-kube-api-access-qdphb\") pod \"8b1dde17-8b85-45c0-bef3-a9439be5632e\" (UID: \"8b1dde17-8b85-45c0-bef3-a9439be5632e\") " Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.090494 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e12dde4-3f6a-4ddc-b8bc-385ccc197453-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1e12dde4-3f6a-4ddc-b8bc-385ccc197453" (UID: "1e12dde4-3f6a-4ddc-b8bc-385ccc197453"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.090513 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3" (UID: "ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.090900 5014 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3-logs\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.090917 5014 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1e12dde4-3f6a-4ddc-b8bc-385ccc197453-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.090926 5014 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.095395 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3" (UID: "ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.096975 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3-kube-api-access-bwzbh" (OuterVolumeSpecName: "kube-api-access-bwzbh") pod "ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3" (UID: "ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3"). InnerVolumeSpecName "kube-api-access-bwzbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.099334 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e12dde4-3f6a-4ddc-b8bc-385ccc197453-scripts" (OuterVolumeSpecName: "scripts") pod "1e12dde4-3f6a-4ddc-b8bc-385ccc197453" (UID: "1e12dde4-3f6a-4ddc-b8bc-385ccc197453"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.099566 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e12dde4-3f6a-4ddc-b8bc-385ccc197453-logs" (OuterVolumeSpecName: "logs") pod "1e12dde4-3f6a-4ddc-b8bc-385ccc197453" (UID: "1e12dde4-3f6a-4ddc-b8bc-385ccc197453"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.100898 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "1e12dde4-3f6a-4ddc-b8bc-385ccc197453" (UID: "1e12dde4-3f6a-4ddc-b8bc-385ccc197453"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.101506 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e12dde4-3f6a-4ddc-b8bc-385ccc197453-kube-api-access-5qf4p" (OuterVolumeSpecName: "kube-api-access-5qf4p") pod "1e12dde4-3f6a-4ddc-b8bc-385ccc197453" (UID: "1e12dde4-3f6a-4ddc-b8bc-385ccc197453"). InnerVolumeSpecName "kube-api-access-5qf4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.104254 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3-scripts" (OuterVolumeSpecName: "scripts") pod "ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3" (UID: "ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.119020 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b1dde17-8b85-45c0-bef3-a9439be5632e-kube-api-access-qdphb" (OuterVolumeSpecName: "kube-api-access-qdphb") pod "8b1dde17-8b85-45c0-bef3-a9439be5632e" (UID: "8b1dde17-8b85-45c0-bef3-a9439be5632e"). InnerVolumeSpecName "kube-api-access-qdphb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.136137 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e12dde4-3f6a-4ddc-b8bc-385ccc197453-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e12dde4-3f6a-4ddc-b8bc-385ccc197453" (UID: "1e12dde4-3f6a-4ddc-b8bc-385ccc197453"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.173452 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3" (UID: "ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.183165 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e12dde4-3f6a-4ddc-b8bc-385ccc197453-config-data" (OuterVolumeSpecName: "config-data") pod "1e12dde4-3f6a-4ddc-b8bc-385ccc197453" (UID: "1e12dde4-3f6a-4ddc-b8bc-385ccc197453"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.186472 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b1dde17-8b85-45c0-bef3-a9439be5632e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8b1dde17-8b85-45c0-bef3-a9439be5632e" (UID: "8b1dde17-8b85-45c0-bef3-a9439be5632e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.189459 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b1dde17-8b85-45c0-bef3-a9439be5632e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8b1dde17-8b85-45c0-bef3-a9439be5632e" (UID: "8b1dde17-8b85-45c0-bef3-a9439be5632e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.194019 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdphb\" (UniqueName: \"kubernetes.io/projected/8b1dde17-8b85-45c0-bef3-a9439be5632e-kube-api-access-qdphb\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.194052 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e12dde4-3f6a-4ddc-b8bc-385ccc197453-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.194086 5014 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.194099 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qf4p\" (UniqueName: \"kubernetes.io/projected/1e12dde4-3f6a-4ddc-b8bc-385ccc197453-kube-api-access-5qf4p\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.194111 5014 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8b1dde17-8b85-45c0-bef3-a9439be5632e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.194123 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.194146 5014 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e12dde4-3f6a-4ddc-b8bc-385ccc197453-logs\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.194158 
5014 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8b1dde17-8b85-45c0-bef3-a9439be5632e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.194168 5014 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1e12dde4-3f6a-4ddc-b8bc-385ccc197453-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.194178 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e12dde4-3f6a-4ddc-b8bc-385ccc197453-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.194194 5014 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.194206 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwzbh\" (UniqueName: \"kubernetes.io/projected/ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3-kube-api-access-bwzbh\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.194218 5014 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.198895 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b1dde17-8b85-45c0-bef3-a9439be5632e-config" (OuterVolumeSpecName: "config") pod "8b1dde17-8b85-45c0-bef3-a9439be5632e" (UID: "8b1dde17-8b85-45c0-bef3-a9439be5632e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.216557 5014 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.216557 5014 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.220997 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3-config-data" (OuterVolumeSpecName: "config-data") pod "ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3" (UID: "ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.221390 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b1dde17-8b85-45c0-bef3-a9439be5632e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8b1dde17-8b85-45c0-bef3-a9439be5632e" (UID: "8b1dde17-8b85-45c0-bef3-a9439be5632e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.251432 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-c9c88866d-6m8lj"] Feb 28 04:53:45 crc kubenswrapper[5014]: W0228 04:53:45.255350 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ee56420_1b4d_4898_97db_d05756b9bb72.slice/crio-8cf63449286cd1d2c6c2eb97a5ba98bafc6693dcdee03918891220dda3f307a3 WatchSource:0}: Error finding container 8cf63449286cd1d2c6c2eb97a5ba98bafc6693dcdee03918891220dda3f307a3: Status 404 returned error can't find the container with id 8cf63449286cd1d2c6c2eb97a5ba98bafc6693dcdee03918891220dda3f307a3 Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.306961 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.307009 5014 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.307021 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b1dde17-8b85-45c0-bef3-a9439be5632e-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.307032 5014 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8b1dde17-8b85-45c0-bef3-a9439be5632e-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.307043 5014 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" 
DevicePath \"\"" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.416329 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6cbc78cbb4-6wlp7"] Feb 28 04:53:45 crc kubenswrapper[5014]: W0228 04:53:45.463884 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c0a6572_64f2_488b_9533_c04957535d16.slice/crio-020aea1ec164a2e867ac33a84913430eaec59754b31bf11d45cd43768fa43064 WatchSource:0}: Error finding container 020aea1ec164a2e867ac33a84913430eaec59754b31bf11d45cd43768fa43064: Status 404 returned error can't find the container with id 020aea1ec164a2e867ac33a84913430eaec59754b31bf11d45cd43768fa43064 Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.465306 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-cmvws"] Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.487869 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"1e12dde4-3f6a-4ddc-b8bc-385ccc197453","Type":"ContainerDied","Data":"3815023cd78b239a5aafc0f6a33abe3ebc69ea49b153df0bc3f82c37c62a76d1"} Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.487939 5014 scope.go:117] "RemoveContainer" containerID="f5a8424c7e572f058f25fc8dbe43473333257942cc0b83a235f7502a1d6af6b5" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.488181 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.491005 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-kgdjz" event={"ID":"8b1dde17-8b85-45c0-bef3-a9439be5632e","Type":"ContainerDied","Data":"2dbaa08081bda4d23cae5a7e1258718b564bed8bbd9ac1a8e7c8d5722e782918"} Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.491136 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-kgdjz" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.494256 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cmvws" event={"ID":"8c0a6572-64f2-488b-9533-c04957535d16","Type":"ContainerStarted","Data":"020aea1ec164a2e867ac33a84913430eaec59754b31bf11d45cd43768fa43064"} Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.499176 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.499423 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3","Type":"ContainerDied","Data":"6c74099feae3d9689ca72a0316e43b21ffe1795fa5c5b0790de631eb9a57e642"} Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.500776 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cbc78cbb4-6wlp7" event={"ID":"80e6122e-74aa-4ee6-a7a3-4af495cb55b7","Type":"ContainerStarted","Data":"ccddd8be70135de8e3d35c92256bc91174f8141e4ba7beff56dee84bd7a7ece3"} Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.502019 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2d99d0c-9a87-4d80-8105-5c86158f6770","Type":"ContainerStarted","Data":"5285b11c7f63ea45c0b337d406c4345d1d6dd50a696f0fc66b1291c91ecf9739"} Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.503933 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c9c88866d-6m8lj" event={"ID":"6ee56420-1b4d-4898-97db-d05756b9bb72","Type":"ContainerStarted","Data":"8cf63449286cd1d2c6c2eb97a5ba98bafc6693dcdee03918891220dda3f307a3"} Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.514110 5014 scope.go:117] "RemoveContainer" containerID="76be9aea2ff31ef982d06969a0d7098ae12d18a9057baaa82d3b046c794b6ee7" Feb 28 04:53:45 
crc kubenswrapper[5014]: E0228 04:53:45.514316 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-c9b9j" podUID="1688b2e2-1aaf-49e0-8414-0f12bb079aba" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.545846 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-kgdjz"] Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.558972 5014 scope.go:117] "RemoveContainer" containerID="b57907d1126245cbfcac823eeb4015387e57afb28aabba87c1cf311a841e1879" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.566431 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-kgdjz"] Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.585028 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.598479 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.599964 5014 scope.go:117] "RemoveContainer" containerID="8189ca449b58c04c5146ce79f4860ca822744669d903d6eb3bd5c6f0130218b8" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.611768 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.621883 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.628160 5014 scope.go:117] "RemoveContainer" containerID="30385ce18f5065cf1204f32a63834b563f809d5116e2abdb1b2c7059356af00b" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.628321 5014 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 04:53:45 crc kubenswrapper[5014]: E0228 04:53:45.628711 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3" containerName="glance-log" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.628729 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3" containerName="glance-log" Feb 28 04:53:45 crc kubenswrapper[5014]: E0228 04:53:45.628740 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e12dde4-3f6a-4ddc-b8bc-385ccc197453" containerName="glance-log" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.628747 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e12dde4-3f6a-4ddc-b8bc-385ccc197453" containerName="glance-log" Feb 28 04:53:45 crc kubenswrapper[5014]: E0228 04:53:45.628761 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b1dde17-8b85-45c0-bef3-a9439be5632e" containerName="dnsmasq-dns" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.628767 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b1dde17-8b85-45c0-bef3-a9439be5632e" containerName="dnsmasq-dns" Feb 28 04:53:45 crc kubenswrapper[5014]: E0228 04:53:45.628788 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3" containerName="glance-httpd" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.628794 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3" containerName="glance-httpd" Feb 28 04:53:45 crc kubenswrapper[5014]: E0228 04:53:45.628823 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b1dde17-8b85-45c0-bef3-a9439be5632e" containerName="init" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.628830 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b1dde17-8b85-45c0-bef3-a9439be5632e" containerName="init" Feb 28 
04:53:45 crc kubenswrapper[5014]: E0228 04:53:45.628843 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e12dde4-3f6a-4ddc-b8bc-385ccc197453" containerName="glance-httpd" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.628850 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e12dde4-3f6a-4ddc-b8bc-385ccc197453" containerName="glance-httpd" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.629017 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e12dde4-3f6a-4ddc-b8bc-385ccc197453" containerName="glance-httpd" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.629043 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b1dde17-8b85-45c0-bef3-a9439be5632e" containerName="dnsmasq-dns" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.629059 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e12dde4-3f6a-4ddc-b8bc-385ccc197453" containerName="glance-log" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.629074 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3" containerName="glance-httpd" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.629086 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3" containerName="glance-log" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.630299 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.632692 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-qvwbm" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.633041 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.633352 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.634763 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.640462 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.642823 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.651642 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.651830 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.656340 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.668693 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.706629 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.706712 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.718801 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6f52a80-76bd-4c20-a619-2926065d7824-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d6f52a80-76bd-4c20-a619-2926065d7824\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.718870 5014 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6f52a80-76bd-4c20-a619-2926065d7824-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d6f52a80-76bd-4c20-a619-2926065d7824\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.718942 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4a0fe1f-df1b-44ad-bab1-71610e650357-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f4a0fe1f-df1b-44ad-bab1-71610e650357\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.718959 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6f52a80-76bd-4c20-a619-2926065d7824-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d6f52a80-76bd-4c20-a619-2926065d7824\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.719029 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4a0fe1f-df1b-44ad-bab1-71610e650357-scripts\") pod \"glance-default-external-api-0\" (UID: \"f4a0fe1f-df1b-44ad-bab1-71610e650357\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.719083 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"d6f52a80-76bd-4c20-a619-2926065d7824\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.719122 5014 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6f52a80-76bd-4c20-a619-2926065d7824-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d6f52a80-76bd-4c20-a619-2926065d7824\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.719178 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmzs7\" (UniqueName: \"kubernetes.io/projected/f4a0fe1f-df1b-44ad-bab1-71610e650357-kube-api-access-dmzs7\") pod \"glance-default-external-api-0\" (UID: \"f4a0fe1f-df1b-44ad-bab1-71610e650357\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.719200 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"f4a0fe1f-df1b-44ad-bab1-71610e650357\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.719216 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4a0fe1f-df1b-44ad-bab1-71610e650357-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f4a0fe1f-df1b-44ad-bab1-71610e650357\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.719397 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xlbl\" (UniqueName: \"kubernetes.io/projected/d6f52a80-76bd-4c20-a619-2926065d7824-kube-api-access-9xlbl\") pod \"glance-default-internal-api-0\" (UID: \"d6f52a80-76bd-4c20-a619-2926065d7824\") " pod="openstack/glance-default-internal-api-0" Feb 
28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.719440 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4a0fe1f-df1b-44ad-bab1-71610e650357-logs\") pod \"glance-default-external-api-0\" (UID: \"f4a0fe1f-df1b-44ad-bab1-71610e650357\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.719473 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6f52a80-76bd-4c20-a619-2926065d7824-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d6f52a80-76bd-4c20-a619-2926065d7824\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.719508 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6f52a80-76bd-4c20-a619-2926065d7824-logs\") pod \"glance-default-internal-api-0\" (UID: \"d6f52a80-76bd-4c20-a619-2926065d7824\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.719532 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4a0fe1f-df1b-44ad-bab1-71610e650357-config-data\") pod \"glance-default-external-api-0\" (UID: \"f4a0fe1f-df1b-44ad-bab1-71610e650357\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.719559 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f4a0fe1f-df1b-44ad-bab1-71610e650357-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f4a0fe1f-df1b-44ad-bab1-71610e650357\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:45 crc 
kubenswrapper[5014]: I0228 04:53:45.775975 5014 scope.go:117] "RemoveContainer" containerID="3cd7d471e585c59b7642d41b1b51aaf5dc3ea9903eb5aed1127b7cfbd18c5f7f" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.821657 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4a0fe1f-df1b-44ad-bab1-71610e650357-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f4a0fe1f-df1b-44ad-bab1-71610e650357\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.821705 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6f52a80-76bd-4c20-a619-2926065d7824-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d6f52a80-76bd-4c20-a619-2926065d7824\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.821729 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4a0fe1f-df1b-44ad-bab1-71610e650357-scripts\") pod \"glance-default-external-api-0\" (UID: \"f4a0fe1f-df1b-44ad-bab1-71610e650357\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.821766 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"d6f52a80-76bd-4c20-a619-2926065d7824\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.821800 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6f52a80-76bd-4c20-a619-2926065d7824-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"d6f52a80-76bd-4c20-a619-2926065d7824\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.821842 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmzs7\" (UniqueName: \"kubernetes.io/projected/f4a0fe1f-df1b-44ad-bab1-71610e650357-kube-api-access-dmzs7\") pod \"glance-default-external-api-0\" (UID: \"f4a0fe1f-df1b-44ad-bab1-71610e650357\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.821866 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"f4a0fe1f-df1b-44ad-bab1-71610e650357\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.821882 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4a0fe1f-df1b-44ad-bab1-71610e650357-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f4a0fe1f-df1b-44ad-bab1-71610e650357\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.821903 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xlbl\" (UniqueName: \"kubernetes.io/projected/d6f52a80-76bd-4c20-a619-2926065d7824-kube-api-access-9xlbl\") pod \"glance-default-internal-api-0\" (UID: \"d6f52a80-76bd-4c20-a619-2926065d7824\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.821925 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4a0fe1f-df1b-44ad-bab1-71610e650357-logs\") pod \"glance-default-external-api-0\" (UID: \"f4a0fe1f-df1b-44ad-bab1-71610e650357\") " 
pod="openstack/glance-default-external-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.821947 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6f52a80-76bd-4c20-a619-2926065d7824-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d6f52a80-76bd-4c20-a619-2926065d7824\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.821977 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6f52a80-76bd-4c20-a619-2926065d7824-logs\") pod \"glance-default-internal-api-0\" (UID: \"d6f52a80-76bd-4c20-a619-2926065d7824\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.821994 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4a0fe1f-df1b-44ad-bab1-71610e650357-config-data\") pod \"glance-default-external-api-0\" (UID: \"f4a0fe1f-df1b-44ad-bab1-71610e650357\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.822016 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f4a0fe1f-df1b-44ad-bab1-71610e650357-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f4a0fe1f-df1b-44ad-bab1-71610e650357\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.822044 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6f52a80-76bd-4c20-a619-2926065d7824-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d6f52a80-76bd-4c20-a619-2926065d7824\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: 
I0228 04:53:45.822063 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6f52a80-76bd-4c20-a619-2926065d7824-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d6f52a80-76bd-4c20-a619-2926065d7824\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.823053 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f4a0fe1f-df1b-44ad-bab1-71610e650357-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f4a0fe1f-df1b-44ad-bab1-71610e650357\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.823056 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6f52a80-76bd-4c20-a619-2926065d7824-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d6f52a80-76bd-4c20-a619-2926065d7824\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.823395 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4a0fe1f-df1b-44ad-bab1-71610e650357-logs\") pod \"glance-default-external-api-0\" (UID: \"f4a0fe1f-df1b-44ad-bab1-71610e650357\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.823427 5014 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"d6f52a80-76bd-4c20-a619-2926065d7824\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.823598 5014 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume 
\"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"f4a0fe1f-df1b-44ad-bab1-71610e650357\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.824376 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6f52a80-76bd-4c20-a619-2926065d7824-logs\") pod \"glance-default-internal-api-0\" (UID: \"d6f52a80-76bd-4c20-a619-2926065d7824\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.831469 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6f52a80-76bd-4c20-a619-2926065d7824-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d6f52a80-76bd-4c20-a619-2926065d7824\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.832002 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6f52a80-76bd-4c20-a619-2926065d7824-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d6f52a80-76bd-4c20-a619-2926065d7824\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.840176 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6f52a80-76bd-4c20-a619-2926065d7824-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d6f52a80-76bd-4c20-a619-2926065d7824\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.842746 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f4a0fe1f-df1b-44ad-bab1-71610e650357-scripts\") pod \"glance-default-external-api-0\" (UID: \"f4a0fe1f-df1b-44ad-bab1-71610e650357\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.843752 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4a0fe1f-df1b-44ad-bab1-71610e650357-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f4a0fe1f-df1b-44ad-bab1-71610e650357\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.845026 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4a0fe1f-df1b-44ad-bab1-71610e650357-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f4a0fe1f-df1b-44ad-bab1-71610e650357\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.845856 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6f52a80-76bd-4c20-a619-2926065d7824-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d6f52a80-76bd-4c20-a619-2926065d7824\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.847626 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmzs7\" (UniqueName: \"kubernetes.io/projected/f4a0fe1f-df1b-44ad-bab1-71610e650357-kube-api-access-dmzs7\") pod \"glance-default-external-api-0\" (UID: \"f4a0fe1f-df1b-44ad-bab1-71610e650357\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.851691 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xlbl\" (UniqueName: 
\"kubernetes.io/projected/d6f52a80-76bd-4c20-a619-2926065d7824-kube-api-access-9xlbl\") pod \"glance-default-internal-api-0\" (UID: \"d6f52a80-76bd-4c20-a619-2926065d7824\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.854845 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4a0fe1f-df1b-44ad-bab1-71610e650357-config-data\") pod \"glance-default-external-api-0\" (UID: \"f4a0fe1f-df1b-44ad-bab1-71610e650357\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.880870 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"d6f52a80-76bd-4c20-a619-2926065d7824\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.888272 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"f4a0fe1f-df1b-44ad-bab1-71610e650357\") " pod="openstack/glance-default-external-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.967868 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 28 04:53:45 crc kubenswrapper[5014]: I0228 04:53:45.979294 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 28 04:53:46 crc kubenswrapper[5014]: I0228 04:53:46.182367 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e12dde4-3f6a-4ddc-b8bc-385ccc197453" path="/var/lib/kubelet/pods/1e12dde4-3f6a-4ddc-b8bc-385ccc197453/volumes" Feb 28 04:53:46 crc kubenswrapper[5014]: I0228 04:53:46.183574 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b1dde17-8b85-45c0-bef3-a9439be5632e" path="/var/lib/kubelet/pods/8b1dde17-8b85-45c0-bef3-a9439be5632e/volumes" Feb 28 04:53:46 crc kubenswrapper[5014]: I0228 04:53:46.184614 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3" path="/var/lib/kubelet/pods/ee3ace82-5482-4fd0-9e7c-ea9b67d7fdb3/volumes" Feb 28 04:53:46 crc kubenswrapper[5014]: I0228 04:53:46.538080 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c9c88866d-6m8lj" event={"ID":"6ee56420-1b4d-4898-97db-d05756b9bb72","Type":"ContainerStarted","Data":"3c205285ecdd23b64f7313a3ce9ea9f6f55635ce3c5463d0549a7e7462af4c0b"} Feb 28 04:53:46 crc kubenswrapper[5014]: I0228 04:53:46.546705 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cmvws" event={"ID":"8c0a6572-64f2-488b-9533-c04957535d16","Type":"ContainerStarted","Data":"2e8c9e659725b7c3cfeb4a686cc1ebfeb6a49d0f4102098b4925a7e7d1aa3aaa"} Feb 28 04:53:46 crc kubenswrapper[5014]: I0228 04:53:46.548914 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cbc78cbb4-6wlp7" event={"ID":"80e6122e-74aa-4ee6-a7a3-4af495cb55b7","Type":"ContainerStarted","Data":"76ed047bf90263787959b88328e36777c017c0f8dd1ff494685dddd105e6d8cd"} Feb 28 04:53:46 crc kubenswrapper[5014]: I0228 04:53:46.566154 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-cmvws" podStartSLOduration=3.565782465 podStartE2EDuration="3.565782465s" 
podCreationTimestamp="2026-02-28 04:53:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:53:46.565320693 +0000 UTC m=+1215.235446613" watchObservedRunningTime="2026-02-28 04:53:46.565782465 +0000 UTC m=+1215.235908375" Feb 28 04:53:46 crc kubenswrapper[5014]: I0228 04:53:46.606168 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 04:53:46 crc kubenswrapper[5014]: I0228 04:53:46.662356 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 04:53:46 crc kubenswrapper[5014]: W0228 04:53:46.698628 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4a0fe1f_df1b_44ad_bab1_71610e650357.slice/crio-c535a16d4c28041833f39ce7c2c0d3763a658c124acf92cf428af777db5241c4 WatchSource:0}: Error finding container c535a16d4c28041833f39ce7c2c0d3763a658c124acf92cf428af777db5241c4: Status 404 returned error can't find the container with id c535a16d4c28041833f39ce7c2c0d3763a658c124acf92cf428af777db5241c4 Feb 28 04:53:46 crc kubenswrapper[5014]: W0228 04:53:46.699929 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6f52a80_76bd_4c20_a619_2926065d7824.slice/crio-003a808f8f5dfadb700f4924a0a0bb0112b97b3a4ed7499f6d19db76c7f108de WatchSource:0}: Error finding container 003a808f8f5dfadb700f4924a0a0bb0112b97b3a4ed7499f6d19db76c7f108de: Status 404 returned error can't find the container with id 003a808f8f5dfadb700f4924a0a0bb0112b97b3a4ed7499f6d19db76c7f108de Feb 28 04:53:47 crc kubenswrapper[5014]: I0228 04:53:47.566613 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"e2d99d0c-9a87-4d80-8105-5c86158f6770","Type":"ContainerStarted","Data":"df0a00a59040905d57860047e9263d3015a68c94ca415d4eb5741d25b71aefc0"} Feb 28 04:53:47 crc kubenswrapper[5014]: I0228 04:53:47.579020 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d6f52a80-76bd-4c20-a619-2926065d7824","Type":"ContainerStarted","Data":"3d26f8527acbd6672331b9c2d4d418035dbc87e4072030173ebbbdfee80972e7"} Feb 28 04:53:47 crc kubenswrapper[5014]: I0228 04:53:47.579077 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d6f52a80-76bd-4c20-a619-2926065d7824","Type":"ContainerStarted","Data":"003a808f8f5dfadb700f4924a0a0bb0112b97b3a4ed7499f6d19db76c7f108de"} Feb 28 04:53:47 crc kubenswrapper[5014]: I0228 04:53:47.581768 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f4a0fe1f-df1b-44ad-bab1-71610e650357","Type":"ContainerStarted","Data":"7e7337524f7dc89d1658cce858b13c9109af9e8fddadf430a18536cc94ddde08"} Feb 28 04:53:47 crc kubenswrapper[5014]: I0228 04:53:47.581835 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f4a0fe1f-df1b-44ad-bab1-71610e650357","Type":"ContainerStarted","Data":"c535a16d4c28041833f39ce7c2c0d3763a658c124acf92cf428af777db5241c4"} Feb 28 04:53:47 crc kubenswrapper[5014]: I0228 04:53:47.585089 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-c9c88866d-6m8lj" event={"ID":"6ee56420-1b4d-4898-97db-d05756b9bb72","Type":"ContainerStarted","Data":"8f11cd6ec4d60acdb456cd3726ce839f2f000847834b23ed9f5727edbe304868"} Feb 28 04:53:47 crc kubenswrapper[5014]: I0228 04:53:47.590388 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cbc78cbb4-6wlp7" 
event={"ID":"80e6122e-74aa-4ee6-a7a3-4af495cb55b7","Type":"ContainerStarted","Data":"e0ca2cc31bef32f1a8996357e09afc4440944891b9575e4c249702b104fa3fa9"} Feb 28 04:53:47 crc kubenswrapper[5014]: I0228 04:53:47.609923 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-c9c88866d-6m8lj" podStartSLOduration=23.090913528 podStartE2EDuration="23.609901005s" podCreationTimestamp="2026-02-28 04:53:24 +0000 UTC" firstStartedPulling="2026-02-28 04:53:45.25743707 +0000 UTC m=+1213.927562980" lastFinishedPulling="2026-02-28 04:53:45.776424547 +0000 UTC m=+1214.446550457" observedRunningTime="2026-02-28 04:53:47.607834929 +0000 UTC m=+1216.277960839" watchObservedRunningTime="2026-02-28 04:53:47.609901005 +0000 UTC m=+1216.280026915" Feb 28 04:53:47 crc kubenswrapper[5014]: I0228 04:53:47.646855 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-6cbc78cbb4-6wlp7" podStartSLOduration=24.193838155 podStartE2EDuration="24.646837171s" podCreationTimestamp="2026-02-28 04:53:23 +0000 UTC" firstStartedPulling="2026-02-28 04:53:45.433319914 +0000 UTC m=+1214.103445824" lastFinishedPulling="2026-02-28 04:53:45.88631893 +0000 UTC m=+1214.556444840" observedRunningTime="2026-02-28 04:53:47.635370672 +0000 UTC m=+1216.305496582" watchObservedRunningTime="2026-02-28 04:53:47.646837171 +0000 UTC m=+1216.316963081" Feb 28 04:53:48 crc kubenswrapper[5014]: I0228 04:53:48.603416 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f4a0fe1f-df1b-44ad-bab1-71610e650357","Type":"ContainerStarted","Data":"25d5dbc4b4de187af03422ccdf5a83f5e3a3d14e5bc1e594a17da340080d03de"} Feb 28 04:53:48 crc kubenswrapper[5014]: I0228 04:53:48.606493 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"d6f52a80-76bd-4c20-a619-2926065d7824","Type":"ContainerStarted","Data":"73de03346cc98ab1dfd17fcf7da1f55cc345dd690c4d5ca09632c5e9607a6de4"} Feb 28 04:53:48 crc kubenswrapper[5014]: I0228 04:53:48.626982 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.626959594 podStartE2EDuration="3.626959594s" podCreationTimestamp="2026-02-28 04:53:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:53:48.624318373 +0000 UTC m=+1217.294444273" watchObservedRunningTime="2026-02-28 04:53:48.626959594 +0000 UTC m=+1217.297085504" Feb 28 04:53:48 crc kubenswrapper[5014]: I0228 04:53:48.652142 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.652123143 podStartE2EDuration="3.652123143s" podCreationTimestamp="2026-02-28 04:53:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:53:48.648790902 +0000 UTC m=+1217.318916812" watchObservedRunningTime="2026-02-28 04:53:48.652123143 +0000 UTC m=+1217.322249053" Feb 28 04:53:49 crc kubenswrapper[5014]: I0228 04:53:49.634321 5014 generic.go:334] "Generic (PLEG): container finished" podID="8c0a6572-64f2-488b-9533-c04957535d16" containerID="2e8c9e659725b7c3cfeb4a686cc1ebfeb6a49d0f4102098b4925a7e7d1aa3aaa" exitCode=0 Feb 28 04:53:49 crc kubenswrapper[5014]: I0228 04:53:49.634442 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-cmvws" event={"ID":"8c0a6572-64f2-488b-9533-c04957535d16","Type":"ContainerDied","Data":"2e8c9e659725b7c3cfeb4a686cc1ebfeb6a49d0f4102098b4925a7e7d1aa3aaa"} Feb 28 04:53:54 crc kubenswrapper[5014]: I0228 04:53:54.044658 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-cmvws" Feb 28 04:53:54 crc kubenswrapper[5014]: I0228 04:53:54.081836 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8c0a6572-64f2-488b-9533-c04957535d16-credential-keys\") pod \"8c0a6572-64f2-488b-9533-c04957535d16\" (UID: \"8c0a6572-64f2-488b-9533-c04957535d16\") " Feb 28 04:53:54 crc kubenswrapper[5014]: I0228 04:53:54.082054 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c0a6572-64f2-488b-9533-c04957535d16-scripts\") pod \"8c0a6572-64f2-488b-9533-c04957535d16\" (UID: \"8c0a6572-64f2-488b-9533-c04957535d16\") " Feb 28 04:53:54 crc kubenswrapper[5014]: I0228 04:53:54.082155 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c0a6572-64f2-488b-9533-c04957535d16-combined-ca-bundle\") pod \"8c0a6572-64f2-488b-9533-c04957535d16\" (UID: \"8c0a6572-64f2-488b-9533-c04957535d16\") " Feb 28 04:53:54 crc kubenswrapper[5014]: I0228 04:53:54.082236 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c0a6572-64f2-488b-9533-c04957535d16-config-data\") pod \"8c0a6572-64f2-488b-9533-c04957535d16\" (UID: \"8c0a6572-64f2-488b-9533-c04957535d16\") " Feb 28 04:53:54 crc kubenswrapper[5014]: I0228 04:53:54.082320 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8c0a6572-64f2-488b-9533-c04957535d16-fernet-keys\") pod \"8c0a6572-64f2-488b-9533-c04957535d16\" (UID: \"8c0a6572-64f2-488b-9533-c04957535d16\") " Feb 28 04:53:54 crc kubenswrapper[5014]: I0228 04:53:54.082412 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbvtt\" (UniqueName: 
\"kubernetes.io/projected/8c0a6572-64f2-488b-9533-c04957535d16-kube-api-access-mbvtt\") pod \"8c0a6572-64f2-488b-9533-c04957535d16\" (UID: \"8c0a6572-64f2-488b-9533-c04957535d16\") " Feb 28 04:53:54 crc kubenswrapper[5014]: I0228 04:53:54.087898 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c0a6572-64f2-488b-9533-c04957535d16-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8c0a6572-64f2-488b-9533-c04957535d16" (UID: "8c0a6572-64f2-488b-9533-c04957535d16"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:53:54 crc kubenswrapper[5014]: I0228 04:53:54.092310 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c0a6572-64f2-488b-9533-c04957535d16-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "8c0a6572-64f2-488b-9533-c04957535d16" (UID: "8c0a6572-64f2-488b-9533-c04957535d16"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:53:54 crc kubenswrapper[5014]: I0228 04:53:54.092980 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c0a6572-64f2-488b-9533-c04957535d16-kube-api-access-mbvtt" (OuterVolumeSpecName: "kube-api-access-mbvtt") pod "8c0a6572-64f2-488b-9533-c04957535d16" (UID: "8c0a6572-64f2-488b-9533-c04957535d16"). InnerVolumeSpecName "kube-api-access-mbvtt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:53:54 crc kubenswrapper[5014]: I0228 04:53:54.108174 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c0a6572-64f2-488b-9533-c04957535d16-scripts" (OuterVolumeSpecName: "scripts") pod "8c0a6572-64f2-488b-9533-c04957535d16" (UID: "8c0a6572-64f2-488b-9533-c04957535d16"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:53:54 crc kubenswrapper[5014]: I0228 04:53:54.118506 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c0a6572-64f2-488b-9533-c04957535d16-config-data" (OuterVolumeSpecName: "config-data") pod "8c0a6572-64f2-488b-9533-c04957535d16" (UID: "8c0a6572-64f2-488b-9533-c04957535d16"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:53:54 crc kubenswrapper[5014]: I0228 04:53:54.153729 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c0a6572-64f2-488b-9533-c04957535d16-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c0a6572-64f2-488b-9533-c04957535d16" (UID: "8c0a6572-64f2-488b-9533-c04957535d16"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:53:54 crc kubenswrapper[5014]: I0228 04:53:54.184418 5014 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c0a6572-64f2-488b-9533-c04957535d16-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:54 crc kubenswrapper[5014]: I0228 04:53:54.184464 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c0a6572-64f2-488b-9533-c04957535d16-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:54 crc kubenswrapper[5014]: I0228 04:53:54.184484 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c0a6572-64f2-488b-9533-c04957535d16-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:54 crc kubenswrapper[5014]: I0228 04:53:54.184496 5014 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8c0a6572-64f2-488b-9533-c04957535d16-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:54 crc 
kubenswrapper[5014]: I0228 04:53:54.184510 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbvtt\" (UniqueName: \"kubernetes.io/projected/8c0a6572-64f2-488b-9533-c04957535d16-kube-api-access-mbvtt\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:54 crc kubenswrapper[5014]: I0228 04:53:54.184523 5014 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8c0a6572-64f2-488b-9533-c04957535d16-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:54 crc kubenswrapper[5014]: I0228 04:53:54.519228 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-6cbc78cbb4-6wlp7" Feb 28 04:53:54 crc kubenswrapper[5014]: I0228 04:53:54.519586 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-6cbc78cbb4-6wlp7" Feb 28 04:53:54 crc kubenswrapper[5014]: I0228 04:53:54.527951 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-c9c88866d-6m8lj" Feb 28 04:53:54 crc kubenswrapper[5014]: I0228 04:53:54.527995 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-c9c88866d-6m8lj" Feb 28 04:53:54 crc kubenswrapper[5014]: I0228 04:53:54.687461 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2d99d0c-9a87-4d80-8105-5c86158f6770","Type":"ContainerStarted","Data":"5c6b8b923b2e0f76fe1b4cbbe0d395ecdb86b62f7a043c552751f388c76968e3"} Feb 28 04:53:54 crc kubenswrapper[5014]: I0228 04:53:54.691792 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5tgzd" event={"ID":"c5e88418-60bd-44ee-8272-245ee92460c6","Type":"ContainerStarted","Data":"af276ef5b9532b8cc1167c9b701c2b045d70ec21a6d0476994c1cb54664be2e8"} Feb 28 04:53:54 crc kubenswrapper[5014]: I0228 04:53:54.699838 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-cmvws" event={"ID":"8c0a6572-64f2-488b-9533-c04957535d16","Type":"ContainerDied","Data":"020aea1ec164a2e867ac33a84913430eaec59754b31bf11d45cd43768fa43064"} Feb 28 04:53:54 crc kubenswrapper[5014]: I0228 04:53:54.699906 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="020aea1ec164a2e867ac33a84913430eaec59754b31bf11d45cd43768fa43064" Feb 28 04:53:54 crc kubenswrapper[5014]: I0228 04:53:54.700011 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-cmvws" Feb 28 04:53:54 crc kubenswrapper[5014]: I0228 04:53:54.704172 5014 generic.go:334] "Generic (PLEG): container finished" podID="f1736698-f0bd-493f-a03e-dc1957763f1a" containerID="7943bd947cd43ddf77c62e9460ccfabe22c48e60eb5f82fe013071110b88514c" exitCode=0 Feb 28 04:53:54 crc kubenswrapper[5014]: I0228 04:53:54.704227 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-7sqlf" event={"ID":"f1736698-f0bd-493f-a03e-dc1957763f1a","Type":"ContainerDied","Data":"7943bd947cd43ddf77c62e9460ccfabe22c48e60eb5f82fe013071110b88514c"} Feb 28 04:53:54 crc kubenswrapper[5014]: I0228 04:53:54.711663 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-5tgzd" podStartSLOduration=2.531242011 podStartE2EDuration="43.711642335s" podCreationTimestamp="2026-02-28 04:53:11 +0000 UTC" firstStartedPulling="2026-02-28 04:53:12.844743556 +0000 UTC m=+1181.514869466" lastFinishedPulling="2026-02-28 04:53:54.02514386 +0000 UTC m=+1222.695269790" observedRunningTime="2026-02-28 04:53:54.70779795 +0000 UTC m=+1223.377923870" watchObservedRunningTime="2026-02-28 04:53:54.711642335 +0000 UTC m=+1223.381768245" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.187647 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-799995d5cd-97xmn"] Feb 28 04:53:55 crc kubenswrapper[5014]: E0228 04:53:55.188109 
5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c0a6572-64f2-488b-9533-c04957535d16" containerName="keystone-bootstrap" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.188365 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c0a6572-64f2-488b-9533-c04957535d16" containerName="keystone-bootstrap" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.188603 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c0a6572-64f2-488b-9533-c04957535d16" containerName="keystone-bootstrap" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.189270 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-799995d5cd-97xmn" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.205957 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-799995d5cd-97xmn"] Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.242758 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zmpcc" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.242922 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.243053 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.243175 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.243304 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.243437 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.245872 5014 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42bdx\" (UniqueName: \"kubernetes.io/projected/2371f935-6c31-4088-ad79-e3dadd298f40-kube-api-access-42bdx\") pod \"keystone-799995d5cd-97xmn\" (UID: \"2371f935-6c31-4088-ad79-e3dadd298f40\") " pod="openstack/keystone-799995d5cd-97xmn" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.245930 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2371f935-6c31-4088-ad79-e3dadd298f40-combined-ca-bundle\") pod \"keystone-799995d5cd-97xmn\" (UID: \"2371f935-6c31-4088-ad79-e3dadd298f40\") " pod="openstack/keystone-799995d5cd-97xmn" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.245981 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2371f935-6c31-4088-ad79-e3dadd298f40-config-data\") pod \"keystone-799995d5cd-97xmn\" (UID: \"2371f935-6c31-4088-ad79-e3dadd298f40\") " pod="openstack/keystone-799995d5cd-97xmn" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.246023 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2371f935-6c31-4088-ad79-e3dadd298f40-credential-keys\") pod \"keystone-799995d5cd-97xmn\" (UID: \"2371f935-6c31-4088-ad79-e3dadd298f40\") " pod="openstack/keystone-799995d5cd-97xmn" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.246047 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2371f935-6c31-4088-ad79-e3dadd298f40-scripts\") pod \"keystone-799995d5cd-97xmn\" (UID: \"2371f935-6c31-4088-ad79-e3dadd298f40\") " pod="openstack/keystone-799995d5cd-97xmn" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.246090 5014 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2371f935-6c31-4088-ad79-e3dadd298f40-internal-tls-certs\") pod \"keystone-799995d5cd-97xmn\" (UID: \"2371f935-6c31-4088-ad79-e3dadd298f40\") " pod="openstack/keystone-799995d5cd-97xmn" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.246104 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2371f935-6c31-4088-ad79-e3dadd298f40-fernet-keys\") pod \"keystone-799995d5cd-97xmn\" (UID: \"2371f935-6c31-4088-ad79-e3dadd298f40\") " pod="openstack/keystone-799995d5cd-97xmn" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.246147 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2371f935-6c31-4088-ad79-e3dadd298f40-public-tls-certs\") pod \"keystone-799995d5cd-97xmn\" (UID: \"2371f935-6c31-4088-ad79-e3dadd298f40\") " pod="openstack/keystone-799995d5cd-97xmn" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.347722 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2371f935-6c31-4088-ad79-e3dadd298f40-scripts\") pod \"keystone-799995d5cd-97xmn\" (UID: \"2371f935-6c31-4088-ad79-e3dadd298f40\") " pod="openstack/keystone-799995d5cd-97xmn" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.347862 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2371f935-6c31-4088-ad79-e3dadd298f40-internal-tls-certs\") pod \"keystone-799995d5cd-97xmn\" (UID: \"2371f935-6c31-4088-ad79-e3dadd298f40\") " pod="openstack/keystone-799995d5cd-97xmn" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.347889 5014 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2371f935-6c31-4088-ad79-e3dadd298f40-fernet-keys\") pod \"keystone-799995d5cd-97xmn\" (UID: \"2371f935-6c31-4088-ad79-e3dadd298f40\") " pod="openstack/keystone-799995d5cd-97xmn" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.348779 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2371f935-6c31-4088-ad79-e3dadd298f40-public-tls-certs\") pod \"keystone-799995d5cd-97xmn\" (UID: \"2371f935-6c31-4088-ad79-e3dadd298f40\") " pod="openstack/keystone-799995d5cd-97xmn" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.348869 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42bdx\" (UniqueName: \"kubernetes.io/projected/2371f935-6c31-4088-ad79-e3dadd298f40-kube-api-access-42bdx\") pod \"keystone-799995d5cd-97xmn\" (UID: \"2371f935-6c31-4088-ad79-e3dadd298f40\") " pod="openstack/keystone-799995d5cd-97xmn" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.348908 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2371f935-6c31-4088-ad79-e3dadd298f40-combined-ca-bundle\") pod \"keystone-799995d5cd-97xmn\" (UID: \"2371f935-6c31-4088-ad79-e3dadd298f40\") " pod="openstack/keystone-799995d5cd-97xmn" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.348967 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2371f935-6c31-4088-ad79-e3dadd298f40-config-data\") pod \"keystone-799995d5cd-97xmn\" (UID: \"2371f935-6c31-4088-ad79-e3dadd298f40\") " pod="openstack/keystone-799995d5cd-97xmn" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.349006 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/2371f935-6c31-4088-ad79-e3dadd298f40-credential-keys\") pod \"keystone-799995d5cd-97xmn\" (UID: \"2371f935-6c31-4088-ad79-e3dadd298f40\") " pod="openstack/keystone-799995d5cd-97xmn" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.353477 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2371f935-6c31-4088-ad79-e3dadd298f40-combined-ca-bundle\") pod \"keystone-799995d5cd-97xmn\" (UID: \"2371f935-6c31-4088-ad79-e3dadd298f40\") " pod="openstack/keystone-799995d5cd-97xmn" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.354182 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2371f935-6c31-4088-ad79-e3dadd298f40-config-data\") pod \"keystone-799995d5cd-97xmn\" (UID: \"2371f935-6c31-4088-ad79-e3dadd298f40\") " pod="openstack/keystone-799995d5cd-97xmn" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.356241 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2371f935-6c31-4088-ad79-e3dadd298f40-public-tls-certs\") pod \"keystone-799995d5cd-97xmn\" (UID: \"2371f935-6c31-4088-ad79-e3dadd298f40\") " pod="openstack/keystone-799995d5cd-97xmn" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.356340 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/2371f935-6c31-4088-ad79-e3dadd298f40-fernet-keys\") pod \"keystone-799995d5cd-97xmn\" (UID: \"2371f935-6c31-4088-ad79-e3dadd298f40\") " pod="openstack/keystone-799995d5cd-97xmn" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.356446 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2371f935-6c31-4088-ad79-e3dadd298f40-internal-tls-certs\") pod \"keystone-799995d5cd-97xmn\" (UID: 
\"2371f935-6c31-4088-ad79-e3dadd298f40\") " pod="openstack/keystone-799995d5cd-97xmn" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.357681 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/2371f935-6c31-4088-ad79-e3dadd298f40-credential-keys\") pod \"keystone-799995d5cd-97xmn\" (UID: \"2371f935-6c31-4088-ad79-e3dadd298f40\") " pod="openstack/keystone-799995d5cd-97xmn" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.357707 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2371f935-6c31-4088-ad79-e3dadd298f40-scripts\") pod \"keystone-799995d5cd-97xmn\" (UID: \"2371f935-6c31-4088-ad79-e3dadd298f40\") " pod="openstack/keystone-799995d5cd-97xmn" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.373628 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42bdx\" (UniqueName: \"kubernetes.io/projected/2371f935-6c31-4088-ad79-e3dadd298f40-kube-api-access-42bdx\") pod \"keystone-799995d5cd-97xmn\" (UID: \"2371f935-6c31-4088-ad79-e3dadd298f40\") " pod="openstack/keystone-799995d5cd-97xmn" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.560632 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-799995d5cd-97xmn" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.968623 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.969124 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.980478 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.980538 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 28 04:53:55 crc kubenswrapper[5014]: I0228 04:53:55.995642 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 28 04:53:56 crc kubenswrapper[5014]: I0228 04:53:56.025905 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 28 04:53:56 crc kubenswrapper[5014]: I0228 04:53:56.039779 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 28 04:53:56 crc kubenswrapper[5014]: I0228 04:53:56.054844 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 28 04:53:56 crc kubenswrapper[5014]: I0228 04:53:56.057141 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-799995d5cd-97xmn"] Feb 28 04:53:56 crc kubenswrapper[5014]: I0228 04:53:56.181623 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-7sqlf" Feb 28 04:53:56 crc kubenswrapper[5014]: I0228 04:53:56.267680 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1736698-f0bd-493f-a03e-dc1957763f1a-combined-ca-bundle\") pod \"f1736698-f0bd-493f-a03e-dc1957763f1a\" (UID: \"f1736698-f0bd-493f-a03e-dc1957763f1a\") " Feb 28 04:53:56 crc kubenswrapper[5014]: I0228 04:53:56.267783 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/f1736698-f0bd-493f-a03e-dc1957763f1a-config\") pod \"f1736698-f0bd-493f-a03e-dc1957763f1a\" (UID: \"f1736698-f0bd-493f-a03e-dc1957763f1a\") " Feb 28 04:53:56 crc kubenswrapper[5014]: I0228 04:53:56.267865 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkmkc\" (UniqueName: \"kubernetes.io/projected/f1736698-f0bd-493f-a03e-dc1957763f1a-kube-api-access-zkmkc\") pod \"f1736698-f0bd-493f-a03e-dc1957763f1a\" (UID: \"f1736698-f0bd-493f-a03e-dc1957763f1a\") " Feb 28 04:53:56 crc kubenswrapper[5014]: I0228 04:53:56.275983 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1736698-f0bd-493f-a03e-dc1957763f1a-kube-api-access-zkmkc" (OuterVolumeSpecName: "kube-api-access-zkmkc") pod "f1736698-f0bd-493f-a03e-dc1957763f1a" (UID: "f1736698-f0bd-493f-a03e-dc1957763f1a"). InnerVolumeSpecName "kube-api-access-zkmkc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:53:56 crc kubenswrapper[5014]: I0228 04:53:56.297961 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1736698-f0bd-493f-a03e-dc1957763f1a-config" (OuterVolumeSpecName: "config") pod "f1736698-f0bd-493f-a03e-dc1957763f1a" (UID: "f1736698-f0bd-493f-a03e-dc1957763f1a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:53:56 crc kubenswrapper[5014]: I0228 04:53:56.306975 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1736698-f0bd-493f-a03e-dc1957763f1a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f1736698-f0bd-493f-a03e-dc1957763f1a" (UID: "f1736698-f0bd-493f-a03e-dc1957763f1a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:53:56 crc kubenswrapper[5014]: I0228 04:53:56.370361 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1736698-f0bd-493f-a03e-dc1957763f1a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:56 crc kubenswrapper[5014]: I0228 04:53:56.370399 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/f1736698-f0bd-493f-a03e-dc1957763f1a-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:56 crc kubenswrapper[5014]: I0228 04:53:56.370410 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkmkc\" (UniqueName: \"kubernetes.io/projected/f1736698-f0bd-493f-a03e-dc1957763f1a-kube-api-access-zkmkc\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:56 crc kubenswrapper[5014]: I0228 04:53:56.729992 5014 generic.go:334] "Generic (PLEG): container finished" podID="c5e88418-60bd-44ee-8272-245ee92460c6" containerID="af276ef5b9532b8cc1167c9b701c2b045d70ec21a6d0476994c1cb54664be2e8" exitCode=0 Feb 28 04:53:56 crc kubenswrapper[5014]: I0228 04:53:56.730185 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5tgzd" event={"ID":"c5e88418-60bd-44ee-8272-245ee92460c6","Type":"ContainerDied","Data":"af276ef5b9532b8cc1167c9b701c2b045d70ec21a6d0476994c1cb54664be2e8"} Feb 28 04:53:56 crc kubenswrapper[5014]: I0228 04:53:56.736864 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-799995d5cd-97xmn" event={"ID":"2371f935-6c31-4088-ad79-e3dadd298f40","Type":"ContainerStarted","Data":"d6ff4e4c43c5ae1303d2742c991a21c34b7fec38e15e2e9cfc235aeceb3fe901"} Feb 28 04:53:56 crc kubenswrapper[5014]: I0228 04:53:56.736913 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-799995d5cd-97xmn" event={"ID":"2371f935-6c31-4088-ad79-e3dadd298f40","Type":"ContainerStarted","Data":"18fee66d5cb996068825f126729cad33b13459e16504e2e8ba1c630f8f1d295b"} Feb 28 04:53:56 crc kubenswrapper[5014]: I0228 04:53:56.737018 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-799995d5cd-97xmn" Feb 28 04:53:56 crc kubenswrapper[5014]: I0228 04:53:56.757221 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-7sqlf" event={"ID":"f1736698-f0bd-493f-a03e-dc1957763f1a","Type":"ContainerDied","Data":"9c318295dbc17d749f58aa83e052a60937df15e71491d3becbe60491b77ea5b7"} Feb 28 04:53:56 crc kubenswrapper[5014]: I0228 04:53:56.757262 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c318295dbc17d749f58aa83e052a60937df15e71491d3becbe60491b77ea5b7" Feb 28 04:53:56 crc kubenswrapper[5014]: I0228 04:53:56.757236 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-7sqlf" Feb 28 04:53:56 crc kubenswrapper[5014]: I0228 04:53:56.761119 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wxq9x" event={"ID":"57f91015-35f5-486c-a88c-0a90f76724e5","Type":"ContainerStarted","Data":"888a50ebc86cbb9c2fa123861d89d1e67c733c3751e4c8a0e65f5b6ff951cd7e"} Feb 28 04:53:56 crc kubenswrapper[5014]: I0228 04:53:56.761156 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 28 04:53:56 crc kubenswrapper[5014]: I0228 04:53:56.762111 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 28 04:53:56 crc kubenswrapper[5014]: I0228 04:53:56.762163 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 28 04:53:56 crc kubenswrapper[5014]: I0228 04:53:56.762174 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 28 04:53:56 crc kubenswrapper[5014]: I0228 04:53:56.787265 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-799995d5cd-97xmn" podStartSLOduration=1.787247773 podStartE2EDuration="1.787247773s" podCreationTimestamp="2026-02-28 04:53:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:53:56.766028721 +0000 UTC m=+1225.436154631" watchObservedRunningTime="2026-02-28 04:53:56.787247773 +0000 UTC m=+1225.457373673" Feb 28 04:53:56 crc kubenswrapper[5014]: I0228 04:53:56.804567 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-wxq9x" podStartSLOduration=2.9848601439999998 podStartE2EDuration="45.80454983s" podCreationTimestamp="2026-02-28 04:53:11 +0000 UTC" firstStartedPulling="2026-02-28 
04:53:12.860019777 +0000 UTC m=+1181.530145687" lastFinishedPulling="2026-02-28 04:53:55.679709463 +0000 UTC m=+1224.349835373" observedRunningTime="2026-02-28 04:53:56.791148258 +0000 UTC m=+1225.461274168" watchObservedRunningTime="2026-02-28 04:53:56.80454983 +0000 UTC m=+1225.474675740" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.007433 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-zgd7q"] Feb 28 04:53:57 crc kubenswrapper[5014]: E0228 04:53:57.007846 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1736698-f0bd-493f-a03e-dc1957763f1a" containerName="neutron-db-sync" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.007863 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1736698-f0bd-493f-a03e-dc1957763f1a" containerName="neutron-db-sync" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.008051 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1736698-f0bd-493f-a03e-dc1957763f1a" containerName="neutron-db-sync" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.008940 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-zgd7q" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.059875 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-zgd7q"] Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.192672 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37d803e4-987c-47c4-b9d6-de67dc94cc6a-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-zgd7q\" (UID: \"37d803e4-987c-47c4-b9d6-de67dc94cc6a\") " pod="openstack/dnsmasq-dns-55f844cf75-zgd7q" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.192768 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37d803e4-987c-47c4-b9d6-de67dc94cc6a-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-zgd7q\" (UID: \"37d803e4-987c-47c4-b9d6-de67dc94cc6a\") " pod="openstack/dnsmasq-dns-55f844cf75-zgd7q" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.192866 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bpj4\" (UniqueName: \"kubernetes.io/projected/37d803e4-987c-47c4-b9d6-de67dc94cc6a-kube-api-access-6bpj4\") pod \"dnsmasq-dns-55f844cf75-zgd7q\" (UID: \"37d803e4-987c-47c4-b9d6-de67dc94cc6a\") " pod="openstack/dnsmasq-dns-55f844cf75-zgd7q" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.192886 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37d803e4-987c-47c4-b9d6-de67dc94cc6a-dns-svc\") pod \"dnsmasq-dns-55f844cf75-zgd7q\" (UID: \"37d803e4-987c-47c4-b9d6-de67dc94cc6a\") " pod="openstack/dnsmasq-dns-55f844cf75-zgd7q" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.192915 5014 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37d803e4-987c-47c4-b9d6-de67dc94cc6a-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-zgd7q\" (UID: \"37d803e4-987c-47c4-b9d6-de67dc94cc6a\") " pod="openstack/dnsmasq-dns-55f844cf75-zgd7q" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.192933 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37d803e4-987c-47c4-b9d6-de67dc94cc6a-config\") pod \"dnsmasq-dns-55f844cf75-zgd7q\" (UID: \"37d803e4-987c-47c4-b9d6-de67dc94cc6a\") " pod="openstack/dnsmasq-dns-55f844cf75-zgd7q" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.226251 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-94f64597b-rtxdm"] Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.229294 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-94f64597b-rtxdm" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.234539 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.234751 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-nq8p7" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.234930 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.235229 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.249098 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-94f64597b-rtxdm"] Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.302784 5014 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-6bpj4\" (UniqueName: \"kubernetes.io/projected/37d803e4-987c-47c4-b9d6-de67dc94cc6a-kube-api-access-6bpj4\") pod \"dnsmasq-dns-55f844cf75-zgd7q\" (UID: \"37d803e4-987c-47c4-b9d6-de67dc94cc6a\") " pod="openstack/dnsmasq-dns-55f844cf75-zgd7q" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.303767 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37d803e4-987c-47c4-b9d6-de67dc94cc6a-dns-svc\") pod \"dnsmasq-dns-55f844cf75-zgd7q\" (UID: \"37d803e4-987c-47c4-b9d6-de67dc94cc6a\") " pod="openstack/dnsmasq-dns-55f844cf75-zgd7q" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.303878 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37d803e4-987c-47c4-b9d6-de67dc94cc6a-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-zgd7q\" (UID: \"37d803e4-987c-47c4-b9d6-de67dc94cc6a\") " pod="openstack/dnsmasq-dns-55f844cf75-zgd7q" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.303953 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37d803e4-987c-47c4-b9d6-de67dc94cc6a-config\") pod \"dnsmasq-dns-55f844cf75-zgd7q\" (UID: \"37d803e4-987c-47c4-b9d6-de67dc94cc6a\") " pod="openstack/dnsmasq-dns-55f844cf75-zgd7q" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.304047 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37d803e4-987c-47c4-b9d6-de67dc94cc6a-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-zgd7q\" (UID: \"37d803e4-987c-47c4-b9d6-de67dc94cc6a\") " pod="openstack/dnsmasq-dns-55f844cf75-zgd7q" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.304159 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/37d803e4-987c-47c4-b9d6-de67dc94cc6a-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-zgd7q\" (UID: \"37d803e4-987c-47c4-b9d6-de67dc94cc6a\") " pod="openstack/dnsmasq-dns-55f844cf75-zgd7q" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.305037 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37d803e4-987c-47c4-b9d6-de67dc94cc6a-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-zgd7q\" (UID: \"37d803e4-987c-47c4-b9d6-de67dc94cc6a\") " pod="openstack/dnsmasq-dns-55f844cf75-zgd7q" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.306629 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37d803e4-987c-47c4-b9d6-de67dc94cc6a-dns-svc\") pod \"dnsmasq-dns-55f844cf75-zgd7q\" (UID: \"37d803e4-987c-47c4-b9d6-de67dc94cc6a\") " pod="openstack/dnsmasq-dns-55f844cf75-zgd7q" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.322477 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37d803e4-987c-47c4-b9d6-de67dc94cc6a-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-zgd7q\" (UID: \"37d803e4-987c-47c4-b9d6-de67dc94cc6a\") " pod="openstack/dnsmasq-dns-55f844cf75-zgd7q" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.331159 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37d803e4-987c-47c4-b9d6-de67dc94cc6a-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-zgd7q\" (UID: \"37d803e4-987c-47c4-b9d6-de67dc94cc6a\") " pod="openstack/dnsmasq-dns-55f844cf75-zgd7q" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.336448 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37d803e4-987c-47c4-b9d6-de67dc94cc6a-config\") pod 
\"dnsmasq-dns-55f844cf75-zgd7q\" (UID: \"37d803e4-987c-47c4-b9d6-de67dc94cc6a\") " pod="openstack/dnsmasq-dns-55f844cf75-zgd7q" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.346693 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bpj4\" (UniqueName: \"kubernetes.io/projected/37d803e4-987c-47c4-b9d6-de67dc94cc6a-kube-api-access-6bpj4\") pod \"dnsmasq-dns-55f844cf75-zgd7q\" (UID: \"37d803e4-987c-47c4-b9d6-de67dc94cc6a\") " pod="openstack/dnsmasq-dns-55f844cf75-zgd7q" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.360209 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-zgd7q" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.409762 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/00ee4598-7f76-410b-8737-7086fd0b5aad-httpd-config\") pod \"neutron-94f64597b-rtxdm\" (UID: \"00ee4598-7f76-410b-8737-7086fd0b5aad\") " pod="openstack/neutron-94f64597b-rtxdm" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.409820 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/00ee4598-7f76-410b-8737-7086fd0b5aad-ovndb-tls-certs\") pod \"neutron-94f64597b-rtxdm\" (UID: \"00ee4598-7f76-410b-8737-7086fd0b5aad\") " pod="openstack/neutron-94f64597b-rtxdm" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.409865 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj6sb\" (UniqueName: \"kubernetes.io/projected/00ee4598-7f76-410b-8737-7086fd0b5aad-kube-api-access-bj6sb\") pod \"neutron-94f64597b-rtxdm\" (UID: \"00ee4598-7f76-410b-8737-7086fd0b5aad\") " pod="openstack/neutron-94f64597b-rtxdm" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.409891 5014 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/00ee4598-7f76-410b-8737-7086fd0b5aad-config\") pod \"neutron-94f64597b-rtxdm\" (UID: \"00ee4598-7f76-410b-8737-7086fd0b5aad\") " pod="openstack/neutron-94f64597b-rtxdm" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.409911 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00ee4598-7f76-410b-8737-7086fd0b5aad-combined-ca-bundle\") pod \"neutron-94f64597b-rtxdm\" (UID: \"00ee4598-7f76-410b-8737-7086fd0b5aad\") " pod="openstack/neutron-94f64597b-rtxdm" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.517719 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/00ee4598-7f76-410b-8737-7086fd0b5aad-ovndb-tls-certs\") pod \"neutron-94f64597b-rtxdm\" (UID: \"00ee4598-7f76-410b-8737-7086fd0b5aad\") " pod="openstack/neutron-94f64597b-rtxdm" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.517765 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/00ee4598-7f76-410b-8737-7086fd0b5aad-httpd-config\") pod \"neutron-94f64597b-rtxdm\" (UID: \"00ee4598-7f76-410b-8737-7086fd0b5aad\") " pod="openstack/neutron-94f64597b-rtxdm" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.517825 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj6sb\" (UniqueName: \"kubernetes.io/projected/00ee4598-7f76-410b-8737-7086fd0b5aad-kube-api-access-bj6sb\") pod \"neutron-94f64597b-rtxdm\" (UID: \"00ee4598-7f76-410b-8737-7086fd0b5aad\") " pod="openstack/neutron-94f64597b-rtxdm" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.517854 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/secret/00ee4598-7f76-410b-8737-7086fd0b5aad-config\") pod \"neutron-94f64597b-rtxdm\" (UID: \"00ee4598-7f76-410b-8737-7086fd0b5aad\") " pod="openstack/neutron-94f64597b-rtxdm" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.517878 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00ee4598-7f76-410b-8737-7086fd0b5aad-combined-ca-bundle\") pod \"neutron-94f64597b-rtxdm\" (UID: \"00ee4598-7f76-410b-8737-7086fd0b5aad\") " pod="openstack/neutron-94f64597b-rtxdm" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.532062 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/00ee4598-7f76-410b-8737-7086fd0b5aad-ovndb-tls-certs\") pod \"neutron-94f64597b-rtxdm\" (UID: \"00ee4598-7f76-410b-8737-7086fd0b5aad\") " pod="openstack/neutron-94f64597b-rtxdm" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.532820 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/00ee4598-7f76-410b-8737-7086fd0b5aad-config\") pod \"neutron-94f64597b-rtxdm\" (UID: \"00ee4598-7f76-410b-8737-7086fd0b5aad\") " pod="openstack/neutron-94f64597b-rtxdm" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.537683 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00ee4598-7f76-410b-8737-7086fd0b5aad-combined-ca-bundle\") pod \"neutron-94f64597b-rtxdm\" (UID: \"00ee4598-7f76-410b-8737-7086fd0b5aad\") " pod="openstack/neutron-94f64597b-rtxdm" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.539641 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/00ee4598-7f76-410b-8737-7086fd0b5aad-httpd-config\") pod \"neutron-94f64597b-rtxdm\" (UID: 
\"00ee4598-7f76-410b-8737-7086fd0b5aad\") " pod="openstack/neutron-94f64597b-rtxdm" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.543692 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj6sb\" (UniqueName: \"kubernetes.io/projected/00ee4598-7f76-410b-8737-7086fd0b5aad-kube-api-access-bj6sb\") pod \"neutron-94f64597b-rtxdm\" (UID: \"00ee4598-7f76-410b-8737-7086fd0b5aad\") " pod="openstack/neutron-94f64597b-rtxdm" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.563924 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-94f64597b-rtxdm" Feb 28 04:53:57 crc kubenswrapper[5014]: I0228 04:53:57.926080 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-zgd7q"] Feb 28 04:53:58 crc kubenswrapper[5014]: I0228 04:53:58.153362 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-5tgzd" Feb 28 04:53:58 crc kubenswrapper[5014]: I0228 04:53:58.221445 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-94f64597b-rtxdm"] Feb 28 04:53:58 crc kubenswrapper[5014]: W0228 04:53:58.224827 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00ee4598_7f76_410b_8737_7086fd0b5aad.slice/crio-99559b79f3e0891e65229ee07b0dcd9604ff785dd3e7ad2357948e09b9210b0b WatchSource:0}: Error finding container 99559b79f3e0891e65229ee07b0dcd9604ff785dd3e7ad2357948e09b9210b0b: Status 404 returned error can't find the container with id 99559b79f3e0891e65229ee07b0dcd9604ff785dd3e7ad2357948e09b9210b0b Feb 28 04:53:58 crc kubenswrapper[5014]: I0228 04:53:58.338064 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tll8\" (UniqueName: \"kubernetes.io/projected/c5e88418-60bd-44ee-8272-245ee92460c6-kube-api-access-8tll8\") pod 
\"c5e88418-60bd-44ee-8272-245ee92460c6\" (UID: \"c5e88418-60bd-44ee-8272-245ee92460c6\") " Feb 28 04:53:58 crc kubenswrapper[5014]: I0228 04:53:58.338120 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5e88418-60bd-44ee-8272-245ee92460c6-config-data\") pod \"c5e88418-60bd-44ee-8272-245ee92460c6\" (UID: \"c5e88418-60bd-44ee-8272-245ee92460c6\") " Feb 28 04:53:58 crc kubenswrapper[5014]: I0228 04:53:58.338285 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5e88418-60bd-44ee-8272-245ee92460c6-logs\") pod \"c5e88418-60bd-44ee-8272-245ee92460c6\" (UID: \"c5e88418-60bd-44ee-8272-245ee92460c6\") " Feb 28 04:53:58 crc kubenswrapper[5014]: I0228 04:53:58.338648 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5e88418-60bd-44ee-8272-245ee92460c6-logs" (OuterVolumeSpecName: "logs") pod "c5e88418-60bd-44ee-8272-245ee92460c6" (UID: "c5e88418-60bd-44ee-8272-245ee92460c6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:53:58 crc kubenswrapper[5014]: I0228 04:53:58.338684 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5e88418-60bd-44ee-8272-245ee92460c6-combined-ca-bundle\") pod \"c5e88418-60bd-44ee-8272-245ee92460c6\" (UID: \"c5e88418-60bd-44ee-8272-245ee92460c6\") " Feb 28 04:53:58 crc kubenswrapper[5014]: I0228 04:53:58.338765 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5e88418-60bd-44ee-8272-245ee92460c6-scripts\") pod \"c5e88418-60bd-44ee-8272-245ee92460c6\" (UID: \"c5e88418-60bd-44ee-8272-245ee92460c6\") " Feb 28 04:53:58 crc kubenswrapper[5014]: I0228 04:53:58.339738 5014 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c5e88418-60bd-44ee-8272-245ee92460c6-logs\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:58 crc kubenswrapper[5014]: I0228 04:53:58.343502 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5e88418-60bd-44ee-8272-245ee92460c6-kube-api-access-8tll8" (OuterVolumeSpecName: "kube-api-access-8tll8") pod "c5e88418-60bd-44ee-8272-245ee92460c6" (UID: "c5e88418-60bd-44ee-8272-245ee92460c6"). InnerVolumeSpecName "kube-api-access-8tll8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:53:58 crc kubenswrapper[5014]: I0228 04:53:58.345394 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5e88418-60bd-44ee-8272-245ee92460c6-scripts" (OuterVolumeSpecName: "scripts") pod "c5e88418-60bd-44ee-8272-245ee92460c6" (UID: "c5e88418-60bd-44ee-8272-245ee92460c6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:53:58 crc kubenswrapper[5014]: I0228 04:53:58.377671 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5e88418-60bd-44ee-8272-245ee92460c6-config-data" (OuterVolumeSpecName: "config-data") pod "c5e88418-60bd-44ee-8272-245ee92460c6" (UID: "c5e88418-60bd-44ee-8272-245ee92460c6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:53:58 crc kubenswrapper[5014]: I0228 04:53:58.379493 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5e88418-60bd-44ee-8272-245ee92460c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c5e88418-60bd-44ee-8272-245ee92460c6" (UID: "c5e88418-60bd-44ee-8272-245ee92460c6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:53:58 crc kubenswrapper[5014]: I0228 04:53:58.441676 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5e88418-60bd-44ee-8272-245ee92460c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:58 crc kubenswrapper[5014]: I0228 04:53:58.441721 5014 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c5e88418-60bd-44ee-8272-245ee92460c6-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:58 crc kubenswrapper[5014]: I0228 04:53:58.441734 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tll8\" (UniqueName: \"kubernetes.io/projected/c5e88418-60bd-44ee-8272-245ee92460c6-kube-api-access-8tll8\") on node \"crc\" DevicePath \"\"" Feb 28 04:53:58 crc kubenswrapper[5014]: I0228 04:53:58.441748 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c5e88418-60bd-44ee-8272-245ee92460c6-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 
04:53:58 crc kubenswrapper[5014]: I0228 04:53:58.800379 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-c9b9j" event={"ID":"1688b2e2-1aaf-49e0-8414-0f12bb079aba","Type":"ContainerStarted","Data":"0ce85616c56a19ae32cdd705e5c4072145e7fad184d9161924b12988e54e9122"} Feb 28 04:53:58 crc kubenswrapper[5014]: I0228 04:53:58.803954 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-94f64597b-rtxdm" event={"ID":"00ee4598-7f76-410b-8737-7086fd0b5aad","Type":"ContainerStarted","Data":"59d50015c9164b1e43e2b391d3dfa8a612b6ce89185cc136fcb117c164a01c45"} Feb 28 04:53:58 crc kubenswrapper[5014]: I0228 04:53:58.803988 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-94f64597b-rtxdm" event={"ID":"00ee4598-7f76-410b-8737-7086fd0b5aad","Type":"ContainerStarted","Data":"99559b79f3e0891e65229ee07b0dcd9604ff785dd3e7ad2357948e09b9210b0b"} Feb 28 04:53:58 crc kubenswrapper[5014]: I0228 04:53:58.805883 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-5tgzd" Feb 28 04:53:58 crc kubenswrapper[5014]: I0228 04:53:58.805879 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-5tgzd" event={"ID":"c5e88418-60bd-44ee-8272-245ee92460c6","Type":"ContainerDied","Data":"5f1a68560bd3185e7a017b7d8df92bb22b87836b9e153298d591142930d4d214"} Feb 28 04:53:58 crc kubenswrapper[5014]: I0228 04:53:58.805934 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f1a68560bd3185e7a017b7d8df92bb22b87836b9e153298d591142930d4d214" Feb 28 04:53:58 crc kubenswrapper[5014]: I0228 04:53:58.807678 5014 generic.go:334] "Generic (PLEG): container finished" podID="37d803e4-987c-47c4-b9d6-de67dc94cc6a" containerID="b33503c01f217bb8814daaed37a99363d79c788b917005ec40468867d5075f2a" exitCode=0 Feb 28 04:53:58 crc kubenswrapper[5014]: I0228 04:53:58.807762 5014 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 28 04:53:58 crc kubenswrapper[5014]: I0228 04:53:58.807772 5014 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 28 04:53:58 crc kubenswrapper[5014]: I0228 04:53:58.809170 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-zgd7q" event={"ID":"37d803e4-987c-47c4-b9d6-de67dc94cc6a","Type":"ContainerDied","Data":"b33503c01f217bb8814daaed37a99363d79c788b917005ec40468867d5075f2a"} Feb 28 04:53:58 crc kubenswrapper[5014]: I0228 04:53:58.809203 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-zgd7q" event={"ID":"37d803e4-987c-47c4-b9d6-de67dc94cc6a","Type":"ContainerStarted","Data":"553bbc6f0af1382b94230df75c8bf1516916b01b3bfb723dbebaffbd4b94af38"} Feb 28 04:53:58 crc kubenswrapper[5014]: I0228 04:53:58.809237 5014 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 28 04:53:58 crc kubenswrapper[5014]: I0228 04:53:58.809245 5014 prober_manager.go:312] "Failed 
to trigger a manual run" probe="Readiness" Feb 28 04:53:58 crc kubenswrapper[5014]: I0228 04:53:58.829624 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-c9b9j" podStartSLOduration=4.155814784 podStartE2EDuration="47.829606374s" podCreationTimestamp="2026-02-28 04:53:11 +0000 UTC" firstStartedPulling="2026-02-28 04:53:12.946048638 +0000 UTC m=+1181.616174548" lastFinishedPulling="2026-02-28 04:53:56.619840218 +0000 UTC m=+1225.289966138" observedRunningTime="2026-02-28 04:53:58.821174697 +0000 UTC m=+1227.491300607" watchObservedRunningTime="2026-02-28 04:53:58.829606374 +0000 UTC m=+1227.499732284" Feb 28 04:53:58 crc kubenswrapper[5014]: I0228 04:53:58.982908 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-5cd4874894-s6tz4"] Feb 28 04:53:58 crc kubenswrapper[5014]: E0228 04:53:58.985086 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5e88418-60bd-44ee-8272-245ee92460c6" containerName="placement-db-sync" Feb 28 04:53:58 crc kubenswrapper[5014]: I0228 04:53:58.985107 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5e88418-60bd-44ee-8272-245ee92460c6" containerName="placement-db-sync" Feb 28 04:53:58 crc kubenswrapper[5014]: I0228 04:53:58.985298 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5e88418-60bd-44ee-8272-245ee92460c6" containerName="placement-db-sync" Feb 28 04:53:58 crc kubenswrapper[5014]: I0228 04:53:58.986155 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5cd4874894-s6tz4"] Feb 28 04:53:58 crc kubenswrapper[5014]: I0228 04:53:58.986244 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-5cd4874894-s6tz4" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:58.997290 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:58.997557 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:58.997777 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:58.997916 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-7fgkr" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.007167 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.118306 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c690f68f-407a-4db7-a99c-67cfa5a5833b-combined-ca-bundle\") pod \"placement-5cd4874894-s6tz4\" (UID: \"c690f68f-407a-4db7-a99c-67cfa5a5833b\") " pod="openstack/placement-5cd4874894-s6tz4" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.118358 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c690f68f-407a-4db7-a99c-67cfa5a5833b-public-tls-certs\") pod \"placement-5cd4874894-s6tz4\" (UID: \"c690f68f-407a-4db7-a99c-67cfa5a5833b\") " pod="openstack/placement-5cd4874894-s6tz4" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.118633 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c690f68f-407a-4db7-a99c-67cfa5a5833b-logs\") pod 
\"placement-5cd4874894-s6tz4\" (UID: \"c690f68f-407a-4db7-a99c-67cfa5a5833b\") " pod="openstack/placement-5cd4874894-s6tz4" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.118671 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct4bq\" (UniqueName: \"kubernetes.io/projected/c690f68f-407a-4db7-a99c-67cfa5a5833b-kube-api-access-ct4bq\") pod \"placement-5cd4874894-s6tz4\" (UID: \"c690f68f-407a-4db7-a99c-67cfa5a5833b\") " pod="openstack/placement-5cd4874894-s6tz4" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.118758 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c690f68f-407a-4db7-a99c-67cfa5a5833b-scripts\") pod \"placement-5cd4874894-s6tz4\" (UID: \"c690f68f-407a-4db7-a99c-67cfa5a5833b\") " pod="openstack/placement-5cd4874894-s6tz4" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.118854 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c690f68f-407a-4db7-a99c-67cfa5a5833b-config-data\") pod \"placement-5cd4874894-s6tz4\" (UID: \"c690f68f-407a-4db7-a99c-67cfa5a5833b\") " pod="openstack/placement-5cd4874894-s6tz4" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.118874 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c690f68f-407a-4db7-a99c-67cfa5a5833b-internal-tls-certs\") pod \"placement-5cd4874894-s6tz4\" (UID: \"c690f68f-407a-4db7-a99c-67cfa5a5833b\") " pod="openstack/placement-5cd4874894-s6tz4" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.219967 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c690f68f-407a-4db7-a99c-67cfa5a5833b-combined-ca-bundle\") pod 
\"placement-5cd4874894-s6tz4\" (UID: \"c690f68f-407a-4db7-a99c-67cfa5a5833b\") " pod="openstack/placement-5cd4874894-s6tz4" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.220024 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c690f68f-407a-4db7-a99c-67cfa5a5833b-public-tls-certs\") pod \"placement-5cd4874894-s6tz4\" (UID: \"c690f68f-407a-4db7-a99c-67cfa5a5833b\") " pod="openstack/placement-5cd4874894-s6tz4" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.220085 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c690f68f-407a-4db7-a99c-67cfa5a5833b-logs\") pod \"placement-5cd4874894-s6tz4\" (UID: \"c690f68f-407a-4db7-a99c-67cfa5a5833b\") " pod="openstack/placement-5cd4874894-s6tz4" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.220115 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct4bq\" (UniqueName: \"kubernetes.io/projected/c690f68f-407a-4db7-a99c-67cfa5a5833b-kube-api-access-ct4bq\") pod \"placement-5cd4874894-s6tz4\" (UID: \"c690f68f-407a-4db7-a99c-67cfa5a5833b\") " pod="openstack/placement-5cd4874894-s6tz4" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.220177 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c690f68f-407a-4db7-a99c-67cfa5a5833b-scripts\") pod \"placement-5cd4874894-s6tz4\" (UID: \"c690f68f-407a-4db7-a99c-67cfa5a5833b\") " pod="openstack/placement-5cd4874894-s6tz4" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.220221 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c690f68f-407a-4db7-a99c-67cfa5a5833b-config-data\") pod \"placement-5cd4874894-s6tz4\" (UID: \"c690f68f-407a-4db7-a99c-67cfa5a5833b\") " 
pod="openstack/placement-5cd4874894-s6tz4" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.220240 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c690f68f-407a-4db7-a99c-67cfa5a5833b-internal-tls-certs\") pod \"placement-5cd4874894-s6tz4\" (UID: \"c690f68f-407a-4db7-a99c-67cfa5a5833b\") " pod="openstack/placement-5cd4874894-s6tz4" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.221307 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c690f68f-407a-4db7-a99c-67cfa5a5833b-logs\") pod \"placement-5cd4874894-s6tz4\" (UID: \"c690f68f-407a-4db7-a99c-67cfa5a5833b\") " pod="openstack/placement-5cd4874894-s6tz4" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.225878 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c690f68f-407a-4db7-a99c-67cfa5a5833b-internal-tls-certs\") pod \"placement-5cd4874894-s6tz4\" (UID: \"c690f68f-407a-4db7-a99c-67cfa5a5833b\") " pod="openstack/placement-5cd4874894-s6tz4" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.228522 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c690f68f-407a-4db7-a99c-67cfa5a5833b-public-tls-certs\") pod \"placement-5cd4874894-s6tz4\" (UID: \"c690f68f-407a-4db7-a99c-67cfa5a5833b\") " pod="openstack/placement-5cd4874894-s6tz4" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.229122 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c690f68f-407a-4db7-a99c-67cfa5a5833b-config-data\") pod \"placement-5cd4874894-s6tz4\" (UID: \"c690f68f-407a-4db7-a99c-67cfa5a5833b\") " pod="openstack/placement-5cd4874894-s6tz4" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.229343 5014 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c690f68f-407a-4db7-a99c-67cfa5a5833b-combined-ca-bundle\") pod \"placement-5cd4874894-s6tz4\" (UID: \"c690f68f-407a-4db7-a99c-67cfa5a5833b\") " pod="openstack/placement-5cd4874894-s6tz4" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.229617 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c690f68f-407a-4db7-a99c-67cfa5a5833b-scripts\") pod \"placement-5cd4874894-s6tz4\" (UID: \"c690f68f-407a-4db7-a99c-67cfa5a5833b\") " pod="openstack/placement-5cd4874894-s6tz4" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.259176 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct4bq\" (UniqueName: \"kubernetes.io/projected/c690f68f-407a-4db7-a99c-67cfa5a5833b-kube-api-access-ct4bq\") pod \"placement-5cd4874894-s6tz4\" (UID: \"c690f68f-407a-4db7-a99c-67cfa5a5833b\") " pod="openstack/placement-5cd4874894-s6tz4" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.334650 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.375262 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-5cd4874894-s6tz4" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.619015 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-58dcfcf9bc-4rtlk"] Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.620562 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-58dcfcf9bc-4rtlk" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.626099 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.626362 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.633292 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-58dcfcf9bc-4rtlk"] Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.729611 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1de3f60c-6e45-4b05-84eb-749e470d4595-public-tls-certs\") pod \"neutron-58dcfcf9bc-4rtlk\" (UID: \"1de3f60c-6e45-4b05-84eb-749e470d4595\") " pod="openstack/neutron-58dcfcf9bc-4rtlk" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.729700 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1de3f60c-6e45-4b05-84eb-749e470d4595-ovndb-tls-certs\") pod \"neutron-58dcfcf9bc-4rtlk\" (UID: \"1de3f60c-6e45-4b05-84eb-749e470d4595\") " pod="openstack/neutron-58dcfcf9bc-4rtlk" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.729817 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1de3f60c-6e45-4b05-84eb-749e470d4595-internal-tls-certs\") pod \"neutron-58dcfcf9bc-4rtlk\" (UID: \"1de3f60c-6e45-4b05-84eb-749e470d4595\") " pod="openstack/neutron-58dcfcf9bc-4rtlk" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.729842 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1de3f60c-6e45-4b05-84eb-749e470d4595-combined-ca-bundle\") pod \"neutron-58dcfcf9bc-4rtlk\" (UID: \"1de3f60c-6e45-4b05-84eb-749e470d4595\") " pod="openstack/neutron-58dcfcf9bc-4rtlk" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.729860 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-657nb\" (UniqueName: \"kubernetes.io/projected/1de3f60c-6e45-4b05-84eb-749e470d4595-kube-api-access-657nb\") pod \"neutron-58dcfcf9bc-4rtlk\" (UID: \"1de3f60c-6e45-4b05-84eb-749e470d4595\") " pod="openstack/neutron-58dcfcf9bc-4rtlk" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.729917 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1de3f60c-6e45-4b05-84eb-749e470d4595-httpd-config\") pod \"neutron-58dcfcf9bc-4rtlk\" (UID: \"1de3f60c-6e45-4b05-84eb-749e470d4595\") " pod="openstack/neutron-58dcfcf9bc-4rtlk" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.729938 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1de3f60c-6e45-4b05-84eb-749e470d4595-config\") pod \"neutron-58dcfcf9bc-4rtlk\" (UID: \"1de3f60c-6e45-4b05-84eb-749e470d4595\") " pod="openstack/neutron-58dcfcf9bc-4rtlk" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.820470 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-94f64597b-rtxdm" event={"ID":"00ee4598-7f76-410b-8737-7086fd0b5aad","Type":"ContainerStarted","Data":"297640f854d1ad0b2237e0bf2efb25418366a3da9d2d0a0b0ff30285ecba1b3c"} Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.820547 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-94f64597b-rtxdm" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.823107 5014 prober_manager.go:312] "Failed to trigger a manual run" 
probe="Readiness" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.824302 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-zgd7q" event={"ID":"37d803e4-987c-47c4-b9d6-de67dc94cc6a","Type":"ContainerStarted","Data":"762349f30bf58ad68e674890cdd67824557554f4a2ffa653f72548b18a7f4619"} Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.824368 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-zgd7q" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.831103 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1de3f60c-6e45-4b05-84eb-749e470d4595-public-tls-certs\") pod \"neutron-58dcfcf9bc-4rtlk\" (UID: \"1de3f60c-6e45-4b05-84eb-749e470d4595\") " pod="openstack/neutron-58dcfcf9bc-4rtlk" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.831235 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1de3f60c-6e45-4b05-84eb-749e470d4595-ovndb-tls-certs\") pod \"neutron-58dcfcf9bc-4rtlk\" (UID: \"1de3f60c-6e45-4b05-84eb-749e470d4595\") " pod="openstack/neutron-58dcfcf9bc-4rtlk" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.831344 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1de3f60c-6e45-4b05-84eb-749e470d4595-internal-tls-certs\") pod \"neutron-58dcfcf9bc-4rtlk\" (UID: \"1de3f60c-6e45-4b05-84eb-749e470d4595\") " pod="openstack/neutron-58dcfcf9bc-4rtlk" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.831424 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1de3f60c-6e45-4b05-84eb-749e470d4595-combined-ca-bundle\") pod \"neutron-58dcfcf9bc-4rtlk\" (UID: 
\"1de3f60c-6e45-4b05-84eb-749e470d4595\") " pod="openstack/neutron-58dcfcf9bc-4rtlk" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.831490 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-657nb\" (UniqueName: \"kubernetes.io/projected/1de3f60c-6e45-4b05-84eb-749e470d4595-kube-api-access-657nb\") pod \"neutron-58dcfcf9bc-4rtlk\" (UID: \"1de3f60c-6e45-4b05-84eb-749e470d4595\") " pod="openstack/neutron-58dcfcf9bc-4rtlk" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.831572 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1de3f60c-6e45-4b05-84eb-749e470d4595-httpd-config\") pod \"neutron-58dcfcf9bc-4rtlk\" (UID: \"1de3f60c-6e45-4b05-84eb-749e470d4595\") " pod="openstack/neutron-58dcfcf9bc-4rtlk" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.831646 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1de3f60c-6e45-4b05-84eb-749e470d4595-config\") pod \"neutron-58dcfcf9bc-4rtlk\" (UID: \"1de3f60c-6e45-4b05-84eb-749e470d4595\") " pod="openstack/neutron-58dcfcf9bc-4rtlk" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.838185 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1de3f60c-6e45-4b05-84eb-749e470d4595-ovndb-tls-certs\") pod \"neutron-58dcfcf9bc-4rtlk\" (UID: \"1de3f60c-6e45-4b05-84eb-749e470d4595\") " pod="openstack/neutron-58dcfcf9bc-4rtlk" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.838968 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1de3f60c-6e45-4b05-84eb-749e470d4595-httpd-config\") pod \"neutron-58dcfcf9bc-4rtlk\" (UID: \"1de3f60c-6e45-4b05-84eb-749e470d4595\") " pod="openstack/neutron-58dcfcf9bc-4rtlk" Feb 28 04:53:59 crc kubenswrapper[5014]: 
I0228 04:53:59.840846 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-94f64597b-rtxdm" podStartSLOduration=2.840828996 podStartE2EDuration="2.840828996s" podCreationTimestamp="2026-02-28 04:53:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:53:59.836359975 +0000 UTC m=+1228.506485895" watchObservedRunningTime="2026-02-28 04:53:59.840828996 +0000 UTC m=+1228.510954906" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.851611 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1de3f60c-6e45-4b05-84eb-749e470d4595-config\") pod \"neutron-58dcfcf9bc-4rtlk\" (UID: \"1de3f60c-6e45-4b05-84eb-749e470d4595\") " pod="openstack/neutron-58dcfcf9bc-4rtlk" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.852619 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1de3f60c-6e45-4b05-84eb-749e470d4595-internal-tls-certs\") pod \"neutron-58dcfcf9bc-4rtlk\" (UID: \"1de3f60c-6e45-4b05-84eb-749e470d4595\") " pod="openstack/neutron-58dcfcf9bc-4rtlk" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.853640 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1de3f60c-6e45-4b05-84eb-749e470d4595-public-tls-certs\") pod \"neutron-58dcfcf9bc-4rtlk\" (UID: \"1de3f60c-6e45-4b05-84eb-749e470d4595\") " pod="openstack/neutron-58dcfcf9bc-4rtlk" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.859784 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-zgd7q" podStartSLOduration=3.859770846 podStartE2EDuration="3.859770846s" podCreationTimestamp="2026-02-28 04:53:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:53:59.857159856 +0000 UTC m=+1228.527285766" watchObservedRunningTime="2026-02-28 04:53:59.859770846 +0000 UTC m=+1228.529896756" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.859734 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1de3f60c-6e45-4b05-84eb-749e470d4595-combined-ca-bundle\") pod \"neutron-58dcfcf9bc-4rtlk\" (UID: \"1de3f60c-6e45-4b05-84eb-749e470d4595\") " pod="openstack/neutron-58dcfcf9bc-4rtlk" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.862037 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-657nb\" (UniqueName: \"kubernetes.io/projected/1de3f60c-6e45-4b05-84eb-749e470d4595-kube-api-access-657nb\") pod \"neutron-58dcfcf9bc-4rtlk\" (UID: \"1de3f60c-6e45-4b05-84eb-749e470d4595\") " pod="openstack/neutron-58dcfcf9bc-4rtlk" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.900776 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.908229 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.913740 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-5cd4874894-s6tz4"] Feb 28 04:53:59 crc kubenswrapper[5014]: I0228 04:53:59.950874 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-58dcfcf9bc-4rtlk" Feb 28 04:54:00 crc kubenswrapper[5014]: I0228 04:54:00.020585 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 28 04:54:00 crc kubenswrapper[5014]: I0228 04:54:00.150672 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537574-x6j2n"] Feb 28 04:54:00 crc kubenswrapper[5014]: I0228 04:54:00.151973 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537574-x6j2n" Feb 28 04:54:00 crc kubenswrapper[5014]: I0228 04:54:00.154260 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 04:54:00 crc kubenswrapper[5014]: I0228 04:54:00.154419 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 04:54:00 crc kubenswrapper[5014]: I0228 04:54:00.154612 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-r6pdk" Feb 28 04:54:00 crc kubenswrapper[5014]: I0228 04:54:00.222367 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537574-x6j2n"] Feb 28 04:54:00 crc kubenswrapper[5014]: I0228 04:54:00.262351 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgsxc\" (UniqueName: \"kubernetes.io/projected/5d33afd2-3959-4f00-8c82-1b46cb382721-kube-api-access-tgsxc\") pod \"auto-csr-approver-29537574-x6j2n\" (UID: \"5d33afd2-3959-4f00-8c82-1b46cb382721\") " pod="openshift-infra/auto-csr-approver-29537574-x6j2n" Feb 28 04:54:00 crc kubenswrapper[5014]: I0228 04:54:00.364336 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgsxc\" (UniqueName: 
\"kubernetes.io/projected/5d33afd2-3959-4f00-8c82-1b46cb382721-kube-api-access-tgsxc\") pod \"auto-csr-approver-29537574-x6j2n\" (UID: \"5d33afd2-3959-4f00-8c82-1b46cb382721\") " pod="openshift-infra/auto-csr-approver-29537574-x6j2n" Feb 28 04:54:00 crc kubenswrapper[5014]: I0228 04:54:00.383836 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgsxc\" (UniqueName: \"kubernetes.io/projected/5d33afd2-3959-4f00-8c82-1b46cb382721-kube-api-access-tgsxc\") pod \"auto-csr-approver-29537574-x6j2n\" (UID: \"5d33afd2-3959-4f00-8c82-1b46cb382721\") " pod="openshift-infra/auto-csr-approver-29537574-x6j2n" Feb 28 04:54:00 crc kubenswrapper[5014]: I0228 04:54:00.521060 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537574-x6j2n" Feb 28 04:54:00 crc kubenswrapper[5014]: I0228 04:54:00.715719 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-58dcfcf9bc-4rtlk"] Feb 28 04:54:00 crc kubenswrapper[5014]: W0228 04:54:00.736294 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1de3f60c_6e45_4b05_84eb_749e470d4595.slice/crio-61a5e1fbd3cf9c3de38048da4986ba3ac145af47d036c55d6b1e86bdb1a224cd WatchSource:0}: Error finding container 61a5e1fbd3cf9c3de38048da4986ba3ac145af47d036c55d6b1e86bdb1a224cd: Status 404 returned error can't find the container with id 61a5e1fbd3cf9c3de38048da4986ba3ac145af47d036c55d6b1e86bdb1a224cd Feb 28 04:54:00 crc kubenswrapper[5014]: I0228 04:54:00.844396 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58dcfcf9bc-4rtlk" event={"ID":"1de3f60c-6e45-4b05-84eb-749e470d4595","Type":"ContainerStarted","Data":"61a5e1fbd3cf9c3de38048da4986ba3ac145af47d036c55d6b1e86bdb1a224cd"} Feb 28 04:54:00 crc kubenswrapper[5014]: I0228 04:54:00.873571 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-5cd4874894-s6tz4" event={"ID":"c690f68f-407a-4db7-a99c-67cfa5a5833b","Type":"ContainerStarted","Data":"d046491a049927c506b3843f3226eb29502c2d9f3ce8a15c8a5b6572325f8270"} Feb 28 04:54:00 crc kubenswrapper[5014]: I0228 04:54:00.873650 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5cd4874894-s6tz4" event={"ID":"c690f68f-407a-4db7-a99c-67cfa5a5833b","Type":"ContainerStarted","Data":"1149604fa3603cc36dec64dbf3d7486011128ed924ad397950890931f330592a"} Feb 28 04:54:00 crc kubenswrapper[5014]: I0228 04:54:00.873924 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5cd4874894-s6tz4" Feb 28 04:54:00 crc kubenswrapper[5014]: I0228 04:54:00.873949 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-5cd4874894-s6tz4" Feb 28 04:54:00 crc kubenswrapper[5014]: I0228 04:54:00.873961 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-5cd4874894-s6tz4" event={"ID":"c690f68f-407a-4db7-a99c-67cfa5a5833b","Type":"ContainerStarted","Data":"ca5ac1b69a39ecba5dc1d59a7aa2d28473d02b62e2d14f07fceba585080de048"} Feb 28 04:54:00 crc kubenswrapper[5014]: I0228 04:54:00.883501 5014 generic.go:334] "Generic (PLEG): container finished" podID="57f91015-35f5-486c-a88c-0a90f76724e5" containerID="888a50ebc86cbb9c2fa123861d89d1e67c733c3751e4c8a0e65f5b6ff951cd7e" exitCode=0 Feb 28 04:54:00 crc kubenswrapper[5014]: I0228 04:54:00.884383 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wxq9x" event={"ID":"57f91015-35f5-486c-a88c-0a90f76724e5","Type":"ContainerDied","Data":"888a50ebc86cbb9c2fa123861d89d1e67c733c3751e4c8a0e65f5b6ff951cd7e"} Feb 28 04:54:00 crc kubenswrapper[5014]: I0228 04:54:00.931150 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-5cd4874894-s6tz4" podStartSLOduration=2.93112618 podStartE2EDuration="2.93112618s" 
podCreationTimestamp="2026-02-28 04:53:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:54:00.901307256 +0000 UTC m=+1229.571433166" watchObservedRunningTime="2026-02-28 04:54:00.93112618 +0000 UTC m=+1229.601252090" Feb 28 04:54:01 crc kubenswrapper[5014]: I0228 04:54:01.051703 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537574-x6j2n"] Feb 28 04:54:01 crc kubenswrapper[5014]: W0228 04:54:01.079283 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d33afd2_3959_4f00_8c82_1b46cb382721.slice/crio-829c238fa2adaf50d8bdaaa8c41ceccd0c75f872cdf884ec58863ca241e6eee5 WatchSource:0}: Error finding container 829c238fa2adaf50d8bdaaa8c41ceccd0c75f872cdf884ec58863ca241e6eee5: Status 404 returned error can't find the container with id 829c238fa2adaf50d8bdaaa8c41ceccd0c75f872cdf884ec58863ca241e6eee5 Feb 28 04:54:01 crc kubenswrapper[5014]: I0228 04:54:01.896639 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537574-x6j2n" event={"ID":"5d33afd2-3959-4f00-8c82-1b46cb382721","Type":"ContainerStarted","Data":"829c238fa2adaf50d8bdaaa8c41ceccd0c75f872cdf884ec58863ca241e6eee5"} Feb 28 04:54:01 crc kubenswrapper[5014]: I0228 04:54:01.898866 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58dcfcf9bc-4rtlk" event={"ID":"1de3f60c-6e45-4b05-84eb-749e470d4595","Type":"ContainerStarted","Data":"8852d83476f63a07fb05cb91637a5e43535cbf25a60381800b0a435994769da5"} Feb 28 04:54:01 crc kubenswrapper[5014]: I0228 04:54:01.898914 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-58dcfcf9bc-4rtlk" event={"ID":"1de3f60c-6e45-4b05-84eb-749e470d4595","Type":"ContainerStarted","Data":"fc1aeffcd4dd1463303cb85d50492906ac02e01ff1cafff62bc5c54ab9bf9a7b"} Feb 28 04:54:01 
crc kubenswrapper[5014]: I0228 04:54:01.915703 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-58dcfcf9bc-4rtlk" podStartSLOduration=2.915685813 podStartE2EDuration="2.915685813s" podCreationTimestamp="2026-02-28 04:53:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:54:01.914714887 +0000 UTC m=+1230.584840797" watchObservedRunningTime="2026-02-28 04:54:01.915685813 +0000 UTC m=+1230.585811723" Feb 28 04:54:02 crc kubenswrapper[5014]: I0228 04:54:02.313674 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-wxq9x" Feb 28 04:54:02 crc kubenswrapper[5014]: I0228 04:54:02.416501 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57f91015-35f5-486c-a88c-0a90f76724e5-combined-ca-bundle\") pod \"57f91015-35f5-486c-a88c-0a90f76724e5\" (UID: \"57f91015-35f5-486c-a88c-0a90f76724e5\") " Feb 28 04:54:02 crc kubenswrapper[5014]: I0228 04:54:02.416768 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5c7j\" (UniqueName: \"kubernetes.io/projected/57f91015-35f5-486c-a88c-0a90f76724e5-kube-api-access-v5c7j\") pod \"57f91015-35f5-486c-a88c-0a90f76724e5\" (UID: \"57f91015-35f5-486c-a88c-0a90f76724e5\") " Feb 28 04:54:02 crc kubenswrapper[5014]: I0228 04:54:02.416827 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/57f91015-35f5-486c-a88c-0a90f76724e5-db-sync-config-data\") pod \"57f91015-35f5-486c-a88c-0a90f76724e5\" (UID: \"57f91015-35f5-486c-a88c-0a90f76724e5\") " Feb 28 04:54:02 crc kubenswrapper[5014]: I0228 04:54:02.421217 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/57f91015-35f5-486c-a88c-0a90f76724e5-kube-api-access-v5c7j" (OuterVolumeSpecName: "kube-api-access-v5c7j") pod "57f91015-35f5-486c-a88c-0a90f76724e5" (UID: "57f91015-35f5-486c-a88c-0a90f76724e5"). InnerVolumeSpecName "kube-api-access-v5c7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:54:02 crc kubenswrapper[5014]: I0228 04:54:02.428248 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57f91015-35f5-486c-a88c-0a90f76724e5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "57f91015-35f5-486c-a88c-0a90f76724e5" (UID: "57f91015-35f5-486c-a88c-0a90f76724e5"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:54:02 crc kubenswrapper[5014]: I0228 04:54:02.442450 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57f91015-35f5-486c-a88c-0a90f76724e5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57f91015-35f5-486c-a88c-0a90f76724e5" (UID: "57f91015-35f5-486c-a88c-0a90f76724e5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:54:02 crc kubenswrapper[5014]: I0228 04:54:02.518485 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5c7j\" (UniqueName: \"kubernetes.io/projected/57f91015-35f5-486c-a88c-0a90f76724e5-kube-api-access-v5c7j\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:02 crc kubenswrapper[5014]: I0228 04:54:02.518511 5014 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/57f91015-35f5-486c-a88c-0a90f76724e5-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:02 crc kubenswrapper[5014]: I0228 04:54:02.518521 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57f91015-35f5-486c-a88c-0a90f76724e5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:02 crc kubenswrapper[5014]: I0228 04:54:02.910333 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-wxq9x" event={"ID":"57f91015-35f5-486c-a88c-0a90f76724e5","Type":"ContainerDied","Data":"39a866eb31f65b2b9453c1b418776eb2164d6d45a055c6c2f12d6813885225fc"} Feb 28 04:54:02 crc kubenswrapper[5014]: I0228 04:54:02.910638 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39a866eb31f65b2b9453c1b418776eb2164d6d45a055c6c2f12d6813885225fc" Feb 28 04:54:02 crc kubenswrapper[5014]: I0228 04:54:02.910344 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-wxq9x" Feb 28 04:54:02 crc kubenswrapper[5014]: I0228 04:54:02.911897 5014 generic.go:334] "Generic (PLEG): container finished" podID="5d33afd2-3959-4f00-8c82-1b46cb382721" containerID="2432b29de6293f4a28983a3721b988c96c505c8d540151c1682b7fef2ac9c405" exitCode=0 Feb 28 04:54:02 crc kubenswrapper[5014]: I0228 04:54:02.912770 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537574-x6j2n" event={"ID":"5d33afd2-3959-4f00-8c82-1b46cb382721","Type":"ContainerDied","Data":"2432b29de6293f4a28983a3721b988c96c505c8d540151c1682b7fef2ac9c405"} Feb 28 04:54:02 crc kubenswrapper[5014]: I0228 04:54:02.912794 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-58dcfcf9bc-4rtlk" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.167641 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-76c688b599-br8wc"] Feb 28 04:54:03 crc kubenswrapper[5014]: E0228 04:54:03.168136 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57f91015-35f5-486c-a88c-0a90f76724e5" containerName="barbican-db-sync" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.168161 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="57f91015-35f5-486c-a88c-0a90f76724e5" containerName="barbican-db-sync" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.168415 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="57f91015-35f5-486c-a88c-0a90f76724e5" containerName="barbican-db-sync" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.170962 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-76c688b599-br8wc" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.173796 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.174506 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.174761 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-xmrn4" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.220481 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-76c688b599-br8wc"] Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.243613 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdfwf\" (UniqueName: \"kubernetes.io/projected/45715325-beb1-4639-bb3c-d466fc6e85ce-kube-api-access-jdfwf\") pod \"barbican-worker-76c688b599-br8wc\" (UID: \"45715325-beb1-4639-bb3c-d466fc6e85ce\") " pod="openstack/barbican-worker-76c688b599-br8wc" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.243903 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45715325-beb1-4639-bb3c-d466fc6e85ce-config-data\") pod \"barbican-worker-76c688b599-br8wc\" (UID: \"45715325-beb1-4639-bb3c-d466fc6e85ce\") " pod="openstack/barbican-worker-76c688b599-br8wc" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.244000 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45715325-beb1-4639-bb3c-d466fc6e85ce-combined-ca-bundle\") pod \"barbican-worker-76c688b599-br8wc\" (UID: \"45715325-beb1-4639-bb3c-d466fc6e85ce\") " 
pod="openstack/barbican-worker-76c688b599-br8wc" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.244152 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45715325-beb1-4639-bb3c-d466fc6e85ce-config-data-custom\") pod \"barbican-worker-76c688b599-br8wc\" (UID: \"45715325-beb1-4639-bb3c-d466fc6e85ce\") " pod="openstack/barbican-worker-76c688b599-br8wc" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.244174 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45715325-beb1-4639-bb3c-d466fc6e85ce-logs\") pod \"barbican-worker-76c688b599-br8wc\" (UID: \"45715325-beb1-4639-bb3c-d466fc6e85ce\") " pod="openstack/barbican-worker-76c688b599-br8wc" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.244357 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-7dd8f4645d-ckwth"] Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.246585 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-7dd8f4645d-ckwth" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.252141 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.273112 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7dd8f4645d-ckwth"] Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.319695 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-zgd7q"] Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.319997 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-zgd7q" podUID="37d803e4-987c-47c4-b9d6-de67dc94cc6a" containerName="dnsmasq-dns" containerID="cri-o://762349f30bf58ad68e674890cdd67824557554f4a2ffa653f72548b18a7f4619" gracePeriod=10 Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.337762 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-rkjmm"] Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.339786 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-rkjmm" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.345710 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45715325-beb1-4639-bb3c-d466fc6e85ce-config-data\") pod \"barbican-worker-76c688b599-br8wc\" (UID: \"45715325-beb1-4639-bb3c-d466fc6e85ce\") " pod="openstack/barbican-worker-76c688b599-br8wc" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.345770 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd8db062-b379-402e-a83b-291ee7e55bf1-config-data-custom\") pod \"barbican-keystone-listener-7dd8f4645d-ckwth\" (UID: \"bd8db062-b379-402e-a83b-291ee7e55bf1\") " pod="openstack/barbican-keystone-listener-7dd8f4645d-ckwth" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.345832 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45715325-beb1-4639-bb3c-d466fc6e85ce-combined-ca-bundle\") pod \"barbican-worker-76c688b599-br8wc\" (UID: \"45715325-beb1-4639-bb3c-d466fc6e85ce\") " pod="openstack/barbican-worker-76c688b599-br8wc" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.345862 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd8db062-b379-402e-a83b-291ee7e55bf1-logs\") pod \"barbican-keystone-listener-7dd8f4645d-ckwth\" (UID: \"bd8db062-b379-402e-a83b-291ee7e55bf1\") " pod="openstack/barbican-keystone-listener-7dd8f4645d-ckwth" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.345909 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/bd8db062-b379-402e-a83b-291ee7e55bf1-combined-ca-bundle\") pod \"barbican-keystone-listener-7dd8f4645d-ckwth\" (UID: \"bd8db062-b379-402e-a83b-291ee7e55bf1\") " pod="openstack/barbican-keystone-listener-7dd8f4645d-ckwth" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.345958 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd8db062-b379-402e-a83b-291ee7e55bf1-config-data\") pod \"barbican-keystone-listener-7dd8f4645d-ckwth\" (UID: \"bd8db062-b379-402e-a83b-291ee7e55bf1\") " pod="openstack/barbican-keystone-listener-7dd8f4645d-ckwth" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.346039 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45715325-beb1-4639-bb3c-d466fc6e85ce-config-data-custom\") pod \"barbican-worker-76c688b599-br8wc\" (UID: \"45715325-beb1-4639-bb3c-d466fc6e85ce\") " pod="openstack/barbican-worker-76c688b599-br8wc" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.346063 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45715325-beb1-4639-bb3c-d466fc6e85ce-logs\") pod \"barbican-worker-76c688b599-br8wc\" (UID: \"45715325-beb1-4639-bb3c-d466fc6e85ce\") " pod="openstack/barbican-worker-76c688b599-br8wc" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.346104 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmcxs\" (UniqueName: \"kubernetes.io/projected/bd8db062-b379-402e-a83b-291ee7e55bf1-kube-api-access-jmcxs\") pod \"barbican-keystone-listener-7dd8f4645d-ckwth\" (UID: \"bd8db062-b379-402e-a83b-291ee7e55bf1\") " pod="openstack/barbican-keystone-listener-7dd8f4645d-ckwth" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.346171 5014 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-jdfwf\" (UniqueName: \"kubernetes.io/projected/45715325-beb1-4639-bb3c-d466fc6e85ce-kube-api-access-jdfwf\") pod \"barbican-worker-76c688b599-br8wc\" (UID: \"45715325-beb1-4639-bb3c-d466fc6e85ce\") " pod="openstack/barbican-worker-76c688b599-br8wc" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.352377 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45715325-beb1-4639-bb3c-d466fc6e85ce-logs\") pod \"barbican-worker-76c688b599-br8wc\" (UID: \"45715325-beb1-4639-bb3c-d466fc6e85ce\") " pod="openstack/barbican-worker-76c688b599-br8wc" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.356755 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45715325-beb1-4639-bb3c-d466fc6e85ce-config-data-custom\") pod \"barbican-worker-76c688b599-br8wc\" (UID: \"45715325-beb1-4639-bb3c-d466fc6e85ce\") " pod="openstack/barbican-worker-76c688b599-br8wc" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.357321 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45715325-beb1-4639-bb3c-d466fc6e85ce-config-data\") pod \"barbican-worker-76c688b599-br8wc\" (UID: \"45715325-beb1-4639-bb3c-d466fc6e85ce\") " pod="openstack/barbican-worker-76c688b599-br8wc" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.357936 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45715325-beb1-4639-bb3c-d466fc6e85ce-combined-ca-bundle\") pod \"barbican-worker-76c688b599-br8wc\" (UID: \"45715325-beb1-4639-bb3c-d466fc6e85ce\") " pod="openstack/barbican-worker-76c688b599-br8wc" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.360856 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/dnsmasq-dns-85ff748b95-rkjmm"] Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.395889 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdfwf\" (UniqueName: \"kubernetes.io/projected/45715325-beb1-4639-bb3c-d466fc6e85ce-kube-api-access-jdfwf\") pod \"barbican-worker-76c688b599-br8wc\" (UID: \"45715325-beb1-4639-bb3c-d466fc6e85ce\") " pod="openstack/barbican-worker-76c688b599-br8wc" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.447898 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f7daae1-d4e4-4396-b788-d49aef714ae4-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-rkjmm\" (UID: \"6f7daae1-d4e4-4396-b788-d49aef714ae4\") " pod="openstack/dnsmasq-dns-85ff748b95-rkjmm" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.447967 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd8db062-b379-402e-a83b-291ee7e55bf1-combined-ca-bundle\") pod \"barbican-keystone-listener-7dd8f4645d-ckwth\" (UID: \"bd8db062-b379-402e-a83b-291ee7e55bf1\") " pod="openstack/barbican-keystone-listener-7dd8f4645d-ckwth" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.448006 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f7daae1-d4e4-4396-b788-d49aef714ae4-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-rkjmm\" (UID: \"6f7daae1-d4e4-4396-b788-d49aef714ae4\") " pod="openstack/dnsmasq-dns-85ff748b95-rkjmm" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.448052 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd8db062-b379-402e-a83b-291ee7e55bf1-config-data\") pod \"barbican-keystone-listener-7dd8f4645d-ckwth\" (UID: 
\"bd8db062-b379-402e-a83b-291ee7e55bf1\") " pod="openstack/barbican-keystone-listener-7dd8f4645d-ckwth" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.448109 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f7daae1-d4e4-4396-b788-d49aef714ae4-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-rkjmm\" (UID: \"6f7daae1-d4e4-4396-b788-d49aef714ae4\") " pod="openstack/dnsmasq-dns-85ff748b95-rkjmm" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.448140 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f7daae1-d4e4-4396-b788-d49aef714ae4-config\") pod \"dnsmasq-dns-85ff748b95-rkjmm\" (UID: \"6f7daae1-d4e4-4396-b788-d49aef714ae4\") " pod="openstack/dnsmasq-dns-85ff748b95-rkjmm" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.448166 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmcxs\" (UniqueName: \"kubernetes.io/projected/bd8db062-b379-402e-a83b-291ee7e55bf1-kube-api-access-jmcxs\") pod \"barbican-keystone-listener-7dd8f4645d-ckwth\" (UID: \"bd8db062-b379-402e-a83b-291ee7e55bf1\") " pod="openstack/barbican-keystone-listener-7dd8f4645d-ckwth" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.448212 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4w6cg\" (UniqueName: \"kubernetes.io/projected/6f7daae1-d4e4-4396-b788-d49aef714ae4-kube-api-access-4w6cg\") pod \"dnsmasq-dns-85ff748b95-rkjmm\" (UID: \"6f7daae1-d4e4-4396-b788-d49aef714ae4\") " pod="openstack/dnsmasq-dns-85ff748b95-rkjmm" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.448284 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/6f7daae1-d4e4-4396-b788-d49aef714ae4-dns-svc\") pod \"dnsmasq-dns-85ff748b95-rkjmm\" (UID: \"6f7daae1-d4e4-4396-b788-d49aef714ae4\") " pod="openstack/dnsmasq-dns-85ff748b95-rkjmm" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.448330 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bd8db062-b379-402e-a83b-291ee7e55bf1-config-data-custom\") pod \"barbican-keystone-listener-7dd8f4645d-ckwth\" (UID: \"bd8db062-b379-402e-a83b-291ee7e55bf1\") " pod="openstack/barbican-keystone-listener-7dd8f4645d-ckwth" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.448375 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd8db062-b379-402e-a83b-291ee7e55bf1-logs\") pod \"barbican-keystone-listener-7dd8f4645d-ckwth\" (UID: \"bd8db062-b379-402e-a83b-291ee7e55bf1\") " pod="openstack/barbican-keystone-listener-7dd8f4645d-ckwth" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.448890 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd8db062-b379-402e-a83b-291ee7e55bf1-logs\") pod \"barbican-keystone-listener-7dd8f4645d-ckwth\" (UID: \"bd8db062-b379-402e-a83b-291ee7e55bf1\") " pod="openstack/barbican-keystone-listener-7dd8f4645d-ckwth" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.453035 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd8db062-b379-402e-a83b-291ee7e55bf1-config-data\") pod \"barbican-keystone-listener-7dd8f4645d-ckwth\" (UID: \"bd8db062-b379-402e-a83b-291ee7e55bf1\") " pod="openstack/barbican-keystone-listener-7dd8f4645d-ckwth" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.461015 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/bd8db062-b379-402e-a83b-291ee7e55bf1-config-data-custom\") pod \"barbican-keystone-listener-7dd8f4645d-ckwth\" (UID: \"bd8db062-b379-402e-a83b-291ee7e55bf1\") " pod="openstack/barbican-keystone-listener-7dd8f4645d-ckwth" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.461043 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd8db062-b379-402e-a83b-291ee7e55bf1-combined-ca-bundle\") pod \"barbican-keystone-listener-7dd8f4645d-ckwth\" (UID: \"bd8db062-b379-402e-a83b-291ee7e55bf1\") " pod="openstack/barbican-keystone-listener-7dd8f4645d-ckwth" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.472636 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5b8b564f66-hrmxb"] Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.474214 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5b8b564f66-hrmxb" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.477064 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.503673 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b8b564f66-hrmxb"] Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.504527 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmcxs\" (UniqueName: \"kubernetes.io/projected/bd8db062-b379-402e-a83b-291ee7e55bf1-kube-api-access-jmcxs\") pod \"barbican-keystone-listener-7dd8f4645d-ckwth\" (UID: \"bd8db062-b379-402e-a83b-291ee7e55bf1\") " pod="openstack/barbican-keystone-listener-7dd8f4645d-ckwth" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.511425 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-76c688b599-br8wc" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.556954 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f7daae1-d4e4-4396-b788-d49aef714ae4-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-rkjmm\" (UID: \"6f7daae1-d4e4-4396-b788-d49aef714ae4\") " pod="openstack/dnsmasq-dns-85ff748b95-rkjmm" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.557024 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f7daae1-d4e4-4396-b788-d49aef714ae4-config\") pod \"dnsmasq-dns-85ff748b95-rkjmm\" (UID: \"6f7daae1-d4e4-4396-b788-d49aef714ae4\") " pod="openstack/dnsmasq-dns-85ff748b95-rkjmm" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.557096 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4w6cg\" (UniqueName: \"kubernetes.io/projected/6f7daae1-d4e4-4396-b788-d49aef714ae4-kube-api-access-4w6cg\") pod \"dnsmasq-dns-85ff748b95-rkjmm\" (UID: \"6f7daae1-d4e4-4396-b788-d49aef714ae4\") " pod="openstack/dnsmasq-dns-85ff748b95-rkjmm" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.557166 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e048509e-80c3-4102-b7a6-bb3d30f06ec1-config-data\") pod \"barbican-api-5b8b564f66-hrmxb\" (UID: \"e048509e-80c3-4102-b7a6-bb3d30f06ec1\") " pod="openstack/barbican-api-5b8b564f66-hrmxb" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.557211 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e048509e-80c3-4102-b7a6-bb3d30f06ec1-config-data-custom\") pod \"barbican-api-5b8b564f66-hrmxb\" (UID: 
\"e048509e-80c3-4102-b7a6-bb3d30f06ec1\") " pod="openstack/barbican-api-5b8b564f66-hrmxb" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.557234 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f7daae1-d4e4-4396-b788-d49aef714ae4-dns-svc\") pod \"dnsmasq-dns-85ff748b95-rkjmm\" (UID: \"6f7daae1-d4e4-4396-b788-d49aef714ae4\") " pod="openstack/dnsmasq-dns-85ff748b95-rkjmm" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.557298 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f7daae1-d4e4-4396-b788-d49aef714ae4-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-rkjmm\" (UID: \"6f7daae1-d4e4-4396-b788-d49aef714ae4\") " pod="openstack/dnsmasq-dns-85ff748b95-rkjmm" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.557325 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f7daae1-d4e4-4396-b788-d49aef714ae4-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-rkjmm\" (UID: \"6f7daae1-d4e4-4396-b788-d49aef714ae4\") " pod="openstack/dnsmasq-dns-85ff748b95-rkjmm" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.557370 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phdpq\" (UniqueName: \"kubernetes.io/projected/e048509e-80c3-4102-b7a6-bb3d30f06ec1-kube-api-access-phdpq\") pod \"barbican-api-5b8b564f66-hrmxb\" (UID: \"e048509e-80c3-4102-b7a6-bb3d30f06ec1\") " pod="openstack/barbican-api-5b8b564f66-hrmxb" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.557385 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e048509e-80c3-4102-b7a6-bb3d30f06ec1-logs\") pod \"barbican-api-5b8b564f66-hrmxb\" (UID: 
\"e048509e-80c3-4102-b7a6-bb3d30f06ec1\") " pod="openstack/barbican-api-5b8b564f66-hrmxb" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.557398 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e048509e-80c3-4102-b7a6-bb3d30f06ec1-combined-ca-bundle\") pod \"barbican-api-5b8b564f66-hrmxb\" (UID: \"e048509e-80c3-4102-b7a6-bb3d30f06ec1\") " pod="openstack/barbican-api-5b8b564f66-hrmxb" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.560752 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f7daae1-d4e4-4396-b788-d49aef714ae4-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-rkjmm\" (UID: \"6f7daae1-d4e4-4396-b788-d49aef714ae4\") " pod="openstack/dnsmasq-dns-85ff748b95-rkjmm" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.561294 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f7daae1-d4e4-4396-b788-d49aef714ae4-config\") pod \"dnsmasq-dns-85ff748b95-rkjmm\" (UID: \"6f7daae1-d4e4-4396-b788-d49aef714ae4\") " pod="openstack/dnsmasq-dns-85ff748b95-rkjmm" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.562043 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f7daae1-d4e4-4396-b788-d49aef714ae4-dns-svc\") pod \"dnsmasq-dns-85ff748b95-rkjmm\" (UID: \"6f7daae1-d4e4-4396-b788-d49aef714ae4\") " pod="openstack/dnsmasq-dns-85ff748b95-rkjmm" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.562677 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f7daae1-d4e4-4396-b788-d49aef714ae4-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-rkjmm\" (UID: \"6f7daae1-d4e4-4396-b788-d49aef714ae4\") " 
pod="openstack/dnsmasq-dns-85ff748b95-rkjmm" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.563288 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f7daae1-d4e4-4396-b788-d49aef714ae4-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-rkjmm\" (UID: \"6f7daae1-d4e4-4396-b788-d49aef714ae4\") " pod="openstack/dnsmasq-dns-85ff748b95-rkjmm" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.584174 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-7dd8f4645d-ckwth" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.591446 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4w6cg\" (UniqueName: \"kubernetes.io/projected/6f7daae1-d4e4-4396-b788-d49aef714ae4-kube-api-access-4w6cg\") pod \"dnsmasq-dns-85ff748b95-rkjmm\" (UID: \"6f7daae1-d4e4-4396-b788-d49aef714ae4\") " pod="openstack/dnsmasq-dns-85ff748b95-rkjmm" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.659919 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e048509e-80c3-4102-b7a6-bb3d30f06ec1-config-data\") pod \"barbican-api-5b8b564f66-hrmxb\" (UID: \"e048509e-80c3-4102-b7a6-bb3d30f06ec1\") " pod="openstack/barbican-api-5b8b564f66-hrmxb" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.659987 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e048509e-80c3-4102-b7a6-bb3d30f06ec1-config-data-custom\") pod \"barbican-api-5b8b564f66-hrmxb\" (UID: \"e048509e-80c3-4102-b7a6-bb3d30f06ec1\") " pod="openstack/barbican-api-5b8b564f66-hrmxb" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.660107 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phdpq\" (UniqueName: 
\"kubernetes.io/projected/e048509e-80c3-4102-b7a6-bb3d30f06ec1-kube-api-access-phdpq\") pod \"barbican-api-5b8b564f66-hrmxb\" (UID: \"e048509e-80c3-4102-b7a6-bb3d30f06ec1\") " pod="openstack/barbican-api-5b8b564f66-hrmxb" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.660133 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e048509e-80c3-4102-b7a6-bb3d30f06ec1-logs\") pod \"barbican-api-5b8b564f66-hrmxb\" (UID: \"e048509e-80c3-4102-b7a6-bb3d30f06ec1\") " pod="openstack/barbican-api-5b8b564f66-hrmxb" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.660154 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e048509e-80c3-4102-b7a6-bb3d30f06ec1-combined-ca-bundle\") pod \"barbican-api-5b8b564f66-hrmxb\" (UID: \"e048509e-80c3-4102-b7a6-bb3d30f06ec1\") " pod="openstack/barbican-api-5b8b564f66-hrmxb" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.664826 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e048509e-80c3-4102-b7a6-bb3d30f06ec1-logs\") pod \"barbican-api-5b8b564f66-hrmxb\" (UID: \"e048509e-80c3-4102-b7a6-bb3d30f06ec1\") " pod="openstack/barbican-api-5b8b564f66-hrmxb" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.665337 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e048509e-80c3-4102-b7a6-bb3d30f06ec1-combined-ca-bundle\") pod \"barbican-api-5b8b564f66-hrmxb\" (UID: \"e048509e-80c3-4102-b7a6-bb3d30f06ec1\") " pod="openstack/barbican-api-5b8b564f66-hrmxb" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.668537 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e048509e-80c3-4102-b7a6-bb3d30f06ec1-config-data\") pod 
\"barbican-api-5b8b564f66-hrmxb\" (UID: \"e048509e-80c3-4102-b7a6-bb3d30f06ec1\") " pod="openstack/barbican-api-5b8b564f66-hrmxb" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.680442 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e048509e-80c3-4102-b7a6-bb3d30f06ec1-config-data-custom\") pod \"barbican-api-5b8b564f66-hrmxb\" (UID: \"e048509e-80c3-4102-b7a6-bb3d30f06ec1\") " pod="openstack/barbican-api-5b8b564f66-hrmxb" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.685542 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-rkjmm" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.686989 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phdpq\" (UniqueName: \"kubernetes.io/projected/e048509e-80c3-4102-b7a6-bb3d30f06ec1-kube-api-access-phdpq\") pod \"barbican-api-5b8b564f66-hrmxb\" (UID: \"e048509e-80c3-4102-b7a6-bb3d30f06ec1\") " pod="openstack/barbican-api-5b8b564f66-hrmxb" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.696568 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5b8b564f66-hrmxb" Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.927519 5014 generic.go:334] "Generic (PLEG): container finished" podID="37d803e4-987c-47c4-b9d6-de67dc94cc6a" containerID="762349f30bf58ad68e674890cdd67824557554f4a2ffa653f72548b18a7f4619" exitCode=0 Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.927567 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-zgd7q" event={"ID":"37d803e4-987c-47c4-b9d6-de67dc94cc6a","Type":"ContainerDied","Data":"762349f30bf58ad68e674890cdd67824557554f4a2ffa653f72548b18a7f4619"} Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.936759 5014 generic.go:334] "Generic (PLEG): container finished" podID="1688b2e2-1aaf-49e0-8414-0f12bb079aba" containerID="0ce85616c56a19ae32cdd705e5c4072145e7fad184d9161924b12988e54e9122" exitCode=0 Feb 28 04:54:03 crc kubenswrapper[5014]: I0228 04:54:03.937046 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-c9b9j" event={"ID":"1688b2e2-1aaf-49e0-8414-0f12bb079aba","Type":"ContainerDied","Data":"0ce85616c56a19ae32cdd705e5c4072145e7fad184d9161924b12988e54e9122"} Feb 28 04:54:04 crc kubenswrapper[5014]: I0228 04:54:04.520270 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-6cbc78cbb4-6wlp7" podUID="80e6122e-74aa-4ee6-a7a3-4af495cb55b7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Feb 28 04:54:04 crc kubenswrapper[5014]: I0228 04:54:04.534316 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-c9c88866d-6m8lj" podUID="6ee56420-1b4d-4898-97db-d05756b9bb72" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.151:8443: connect: connection refused" Feb 28 04:54:06 crc 
kubenswrapper[5014]: I0228 04:54:06.211947 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-85ff55b8dd-q46np"] Feb 28 04:54:06 crc kubenswrapper[5014]: I0228 04:54:06.213418 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-85ff55b8dd-q46np" Feb 28 04:54:06 crc kubenswrapper[5014]: I0228 04:54:06.215889 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Feb 28 04:54:06 crc kubenswrapper[5014]: I0228 04:54:06.216843 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Feb 28 04:54:06 crc kubenswrapper[5014]: I0228 04:54:06.217166 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-85ff55b8dd-q46np"] Feb 28 04:54:06 crc kubenswrapper[5014]: I0228 04:54:06.321165 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c857c36-d78c-484b-a0b1-1cabf11c32a3-logs\") pod \"barbican-api-85ff55b8dd-q46np\" (UID: \"0c857c36-d78c-484b-a0b1-1cabf11c32a3\") " pod="openstack/barbican-api-85ff55b8dd-q46np" Feb 28 04:54:06 crc kubenswrapper[5014]: I0228 04:54:06.321211 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c857c36-d78c-484b-a0b1-1cabf11c32a3-config-data-custom\") pod \"barbican-api-85ff55b8dd-q46np\" (UID: \"0c857c36-d78c-484b-a0b1-1cabf11c32a3\") " pod="openstack/barbican-api-85ff55b8dd-q46np" Feb 28 04:54:06 crc kubenswrapper[5014]: I0228 04:54:06.321246 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c857c36-d78c-484b-a0b1-1cabf11c32a3-public-tls-certs\") pod \"barbican-api-85ff55b8dd-q46np\" (UID: \"0c857c36-d78c-484b-a0b1-1cabf11c32a3\") 
" pod="openstack/barbican-api-85ff55b8dd-q46np" Feb 28 04:54:06 crc kubenswrapper[5014]: I0228 04:54:06.321302 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsmbq\" (UniqueName: \"kubernetes.io/projected/0c857c36-d78c-484b-a0b1-1cabf11c32a3-kube-api-access-jsmbq\") pod \"barbican-api-85ff55b8dd-q46np\" (UID: \"0c857c36-d78c-484b-a0b1-1cabf11c32a3\") " pod="openstack/barbican-api-85ff55b8dd-q46np" Feb 28 04:54:06 crc kubenswrapper[5014]: I0228 04:54:06.321320 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c857c36-d78c-484b-a0b1-1cabf11c32a3-config-data\") pod \"barbican-api-85ff55b8dd-q46np\" (UID: \"0c857c36-d78c-484b-a0b1-1cabf11c32a3\") " pod="openstack/barbican-api-85ff55b8dd-q46np" Feb 28 04:54:06 crc kubenswrapper[5014]: I0228 04:54:06.321360 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c857c36-d78c-484b-a0b1-1cabf11c32a3-internal-tls-certs\") pod \"barbican-api-85ff55b8dd-q46np\" (UID: \"0c857c36-d78c-484b-a0b1-1cabf11c32a3\") " pod="openstack/barbican-api-85ff55b8dd-q46np" Feb 28 04:54:06 crc kubenswrapper[5014]: I0228 04:54:06.321388 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c857c36-d78c-484b-a0b1-1cabf11c32a3-combined-ca-bundle\") pod \"barbican-api-85ff55b8dd-q46np\" (UID: \"0c857c36-d78c-484b-a0b1-1cabf11c32a3\") " pod="openstack/barbican-api-85ff55b8dd-q46np" Feb 28 04:54:06 crc kubenswrapper[5014]: I0228 04:54:06.423011 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c857c36-d78c-484b-a0b1-1cabf11c32a3-logs\") pod \"barbican-api-85ff55b8dd-q46np\" (UID: 
\"0c857c36-d78c-484b-a0b1-1cabf11c32a3\") " pod="openstack/barbican-api-85ff55b8dd-q46np" Feb 28 04:54:06 crc kubenswrapper[5014]: I0228 04:54:06.423073 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c857c36-d78c-484b-a0b1-1cabf11c32a3-config-data-custom\") pod \"barbican-api-85ff55b8dd-q46np\" (UID: \"0c857c36-d78c-484b-a0b1-1cabf11c32a3\") " pod="openstack/barbican-api-85ff55b8dd-q46np" Feb 28 04:54:06 crc kubenswrapper[5014]: I0228 04:54:06.423112 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c857c36-d78c-484b-a0b1-1cabf11c32a3-public-tls-certs\") pod \"barbican-api-85ff55b8dd-q46np\" (UID: \"0c857c36-d78c-484b-a0b1-1cabf11c32a3\") " pod="openstack/barbican-api-85ff55b8dd-q46np" Feb 28 04:54:06 crc kubenswrapper[5014]: I0228 04:54:06.423188 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsmbq\" (UniqueName: \"kubernetes.io/projected/0c857c36-d78c-484b-a0b1-1cabf11c32a3-kube-api-access-jsmbq\") pod \"barbican-api-85ff55b8dd-q46np\" (UID: \"0c857c36-d78c-484b-a0b1-1cabf11c32a3\") " pod="openstack/barbican-api-85ff55b8dd-q46np" Feb 28 04:54:06 crc kubenswrapper[5014]: I0228 04:54:06.423241 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c857c36-d78c-484b-a0b1-1cabf11c32a3-config-data\") pod \"barbican-api-85ff55b8dd-q46np\" (UID: \"0c857c36-d78c-484b-a0b1-1cabf11c32a3\") " pod="openstack/barbican-api-85ff55b8dd-q46np" Feb 28 04:54:06 crc kubenswrapper[5014]: I0228 04:54:06.423281 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c857c36-d78c-484b-a0b1-1cabf11c32a3-internal-tls-certs\") pod \"barbican-api-85ff55b8dd-q46np\" (UID: 
\"0c857c36-d78c-484b-a0b1-1cabf11c32a3\") " pod="openstack/barbican-api-85ff55b8dd-q46np" Feb 28 04:54:06 crc kubenswrapper[5014]: I0228 04:54:06.423335 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c857c36-d78c-484b-a0b1-1cabf11c32a3-combined-ca-bundle\") pod \"barbican-api-85ff55b8dd-q46np\" (UID: \"0c857c36-d78c-484b-a0b1-1cabf11c32a3\") " pod="openstack/barbican-api-85ff55b8dd-q46np" Feb 28 04:54:06 crc kubenswrapper[5014]: I0228 04:54:06.424306 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0c857c36-d78c-484b-a0b1-1cabf11c32a3-logs\") pod \"barbican-api-85ff55b8dd-q46np\" (UID: \"0c857c36-d78c-484b-a0b1-1cabf11c32a3\") " pod="openstack/barbican-api-85ff55b8dd-q46np" Feb 28 04:54:06 crc kubenswrapper[5014]: I0228 04:54:06.432479 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c857c36-d78c-484b-a0b1-1cabf11c32a3-internal-tls-certs\") pod \"barbican-api-85ff55b8dd-q46np\" (UID: \"0c857c36-d78c-484b-a0b1-1cabf11c32a3\") " pod="openstack/barbican-api-85ff55b8dd-q46np" Feb 28 04:54:06 crc kubenswrapper[5014]: I0228 04:54:06.432611 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c857c36-d78c-484b-a0b1-1cabf11c32a3-combined-ca-bundle\") pod \"barbican-api-85ff55b8dd-q46np\" (UID: \"0c857c36-d78c-484b-a0b1-1cabf11c32a3\") " pod="openstack/barbican-api-85ff55b8dd-q46np" Feb 28 04:54:06 crc kubenswrapper[5014]: I0228 04:54:06.439104 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsmbq\" (UniqueName: \"kubernetes.io/projected/0c857c36-d78c-484b-a0b1-1cabf11c32a3-kube-api-access-jsmbq\") pod \"barbican-api-85ff55b8dd-q46np\" (UID: \"0c857c36-d78c-484b-a0b1-1cabf11c32a3\") " 
pod="openstack/barbican-api-85ff55b8dd-q46np" Feb 28 04:54:06 crc kubenswrapper[5014]: I0228 04:54:06.439688 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c857c36-d78c-484b-a0b1-1cabf11c32a3-public-tls-certs\") pod \"barbican-api-85ff55b8dd-q46np\" (UID: \"0c857c36-d78c-484b-a0b1-1cabf11c32a3\") " pod="openstack/barbican-api-85ff55b8dd-q46np" Feb 28 04:54:06 crc kubenswrapper[5014]: I0228 04:54:06.442125 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0c857c36-d78c-484b-a0b1-1cabf11c32a3-config-data\") pod \"barbican-api-85ff55b8dd-q46np\" (UID: \"0c857c36-d78c-484b-a0b1-1cabf11c32a3\") " pod="openstack/barbican-api-85ff55b8dd-q46np" Feb 28 04:54:06 crc kubenswrapper[5014]: I0228 04:54:06.444074 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0c857c36-d78c-484b-a0b1-1cabf11c32a3-config-data-custom\") pod \"barbican-api-85ff55b8dd-q46np\" (UID: \"0c857c36-d78c-484b-a0b1-1cabf11c32a3\") " pod="openstack/barbican-api-85ff55b8dd-q46np" Feb 28 04:54:06 crc kubenswrapper[5014]: I0228 04:54:06.541339 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-85ff55b8dd-q46np" Feb 28 04:54:07 crc kubenswrapper[5014]: I0228 04:54:07.363000 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55f844cf75-zgd7q" podUID="37d803e4-987c-47c4-b9d6-de67dc94cc6a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.156:5353: connect: connection refused" Feb 28 04:54:09 crc kubenswrapper[5014]: I0228 04:54:09.861028 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-c9b9j" Feb 28 04:54:09 crc kubenswrapper[5014]: I0228 04:54:09.884899 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537574-x6j2n" Feb 28 04:54:09 crc kubenswrapper[5014]: I0228 04:54:09.909245 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-zgd7q" Feb 28 04:54:09 crc kubenswrapper[5014]: I0228 04:54:09.986965 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37d803e4-987c-47c4-b9d6-de67dc94cc6a-ovsdbserver-nb\") pod \"37d803e4-987c-47c4-b9d6-de67dc94cc6a\" (UID: \"37d803e4-987c-47c4-b9d6-de67dc94cc6a\") " Feb 28 04:54:09 crc kubenswrapper[5014]: I0228 04:54:09.987013 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1688b2e2-1aaf-49e0-8414-0f12bb079aba-scripts\") pod \"1688b2e2-1aaf-49e0-8414-0f12bb079aba\" (UID: \"1688b2e2-1aaf-49e0-8414-0f12bb079aba\") " Feb 28 04:54:09 crc kubenswrapper[5014]: I0228 04:54:09.987058 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1688b2e2-1aaf-49e0-8414-0f12bb079aba-db-sync-config-data\") pod \"1688b2e2-1aaf-49e0-8414-0f12bb079aba\" (UID: \"1688b2e2-1aaf-49e0-8414-0f12bb079aba\") " Feb 28 04:54:09 crc kubenswrapper[5014]: I0228 04:54:09.987103 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37d803e4-987c-47c4-b9d6-de67dc94cc6a-config\") pod \"37d803e4-987c-47c4-b9d6-de67dc94cc6a\" (UID: \"37d803e4-987c-47c4-b9d6-de67dc94cc6a\") " Feb 28 04:54:09 crc kubenswrapper[5014]: I0228 04:54:09.987162 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1688b2e2-1aaf-49e0-8414-0f12bb079aba-combined-ca-bundle\") pod \"1688b2e2-1aaf-49e0-8414-0f12bb079aba\" (UID: \"1688b2e2-1aaf-49e0-8414-0f12bb079aba\") " Feb 28 04:54:09 crc kubenswrapper[5014]: I0228 04:54:09.987182 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37d803e4-987c-47c4-b9d6-de67dc94cc6a-ovsdbserver-sb\") pod \"37d803e4-987c-47c4-b9d6-de67dc94cc6a\" (UID: \"37d803e4-987c-47c4-b9d6-de67dc94cc6a\") " Feb 28 04:54:09 crc kubenswrapper[5014]: I0228 04:54:09.987211 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1688b2e2-1aaf-49e0-8414-0f12bb079aba-etc-machine-id\") pod \"1688b2e2-1aaf-49e0-8414-0f12bb079aba\" (UID: \"1688b2e2-1aaf-49e0-8414-0f12bb079aba\") " Feb 28 04:54:09 crc kubenswrapper[5014]: I0228 04:54:09.987270 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgsxc\" (UniqueName: \"kubernetes.io/projected/5d33afd2-3959-4f00-8c82-1b46cb382721-kube-api-access-tgsxc\") pod \"5d33afd2-3959-4f00-8c82-1b46cb382721\" (UID: \"5d33afd2-3959-4f00-8c82-1b46cb382721\") " Feb 28 04:54:09 crc kubenswrapper[5014]: I0228 04:54:09.987317 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bpj4\" (UniqueName: \"kubernetes.io/projected/37d803e4-987c-47c4-b9d6-de67dc94cc6a-kube-api-access-6bpj4\") pod \"37d803e4-987c-47c4-b9d6-de67dc94cc6a\" (UID: \"37d803e4-987c-47c4-b9d6-de67dc94cc6a\") " Feb 28 04:54:09 crc kubenswrapper[5014]: I0228 04:54:09.987356 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37d803e4-987c-47c4-b9d6-de67dc94cc6a-dns-swift-storage-0\") pod \"37d803e4-987c-47c4-b9d6-de67dc94cc6a\" (UID: 
\"37d803e4-987c-47c4-b9d6-de67dc94cc6a\") " Feb 28 04:54:09 crc kubenswrapper[5014]: I0228 04:54:09.987381 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1688b2e2-1aaf-49e0-8414-0f12bb079aba-config-data\") pod \"1688b2e2-1aaf-49e0-8414-0f12bb079aba\" (UID: \"1688b2e2-1aaf-49e0-8414-0f12bb079aba\") " Feb 28 04:54:09 crc kubenswrapper[5014]: I0228 04:54:09.987432 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37d803e4-987c-47c4-b9d6-de67dc94cc6a-dns-svc\") pod \"37d803e4-987c-47c4-b9d6-de67dc94cc6a\" (UID: \"37d803e4-987c-47c4-b9d6-de67dc94cc6a\") " Feb 28 04:54:09 crc kubenswrapper[5014]: I0228 04:54:09.987452 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cdd5\" (UniqueName: \"kubernetes.io/projected/1688b2e2-1aaf-49e0-8414-0f12bb079aba-kube-api-access-4cdd5\") pod \"1688b2e2-1aaf-49e0-8414-0f12bb079aba\" (UID: \"1688b2e2-1aaf-49e0-8414-0f12bb079aba\") " Feb 28 04:54:09 crc kubenswrapper[5014]: I0228 04:54:09.990409 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1688b2e2-1aaf-49e0-8414-0f12bb079aba-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1688b2e2-1aaf-49e0-8414-0f12bb079aba" (UID: "1688b2e2-1aaf-49e0-8414-0f12bb079aba"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:54:09 crc kubenswrapper[5014]: I0228 04:54:09.994004 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-zgd7q" event={"ID":"37d803e4-987c-47c4-b9d6-de67dc94cc6a","Type":"ContainerDied","Data":"553bbc6f0af1382b94230df75c8bf1516916b01b3bfb723dbebaffbd4b94af38"} Feb 28 04:54:09 crc kubenswrapper[5014]: I0228 04:54:09.994065 5014 scope.go:117] "RemoveContainer" containerID="762349f30bf58ad68e674890cdd67824557554f4a2ffa653f72548b18a7f4619" Feb 28 04:54:09 crc kubenswrapper[5014]: I0228 04:54:09.994187 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-zgd7q" Feb 28 04:54:09 crc kubenswrapper[5014]: I0228 04:54:09.994668 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1688b2e2-1aaf-49e0-8414-0f12bb079aba-kube-api-access-4cdd5" (OuterVolumeSpecName: "kube-api-access-4cdd5") pod "1688b2e2-1aaf-49e0-8414-0f12bb079aba" (UID: "1688b2e2-1aaf-49e0-8414-0f12bb079aba"). InnerVolumeSpecName "kube-api-access-4cdd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:09.996965 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1688b2e2-1aaf-49e0-8414-0f12bb079aba-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "1688b2e2-1aaf-49e0-8414-0f12bb079aba" (UID: "1688b2e2-1aaf-49e0-8414-0f12bb079aba"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:09.998541 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d33afd2-3959-4f00-8c82-1b46cb382721-kube-api-access-tgsxc" (OuterVolumeSpecName: "kube-api-access-tgsxc") pod "5d33afd2-3959-4f00-8c82-1b46cb382721" (UID: "5d33afd2-3959-4f00-8c82-1b46cb382721"). InnerVolumeSpecName "kube-api-access-tgsxc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:09.999851 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1688b2e2-1aaf-49e0-8414-0f12bb079aba-scripts" (OuterVolumeSpecName: "scripts") pod "1688b2e2-1aaf-49e0-8414-0f12bb079aba" (UID: "1688b2e2-1aaf-49e0-8414-0f12bb079aba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.004241 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37d803e4-987c-47c4-b9d6-de67dc94cc6a-kube-api-access-6bpj4" (OuterVolumeSpecName: "kube-api-access-6bpj4") pod "37d803e4-987c-47c4-b9d6-de67dc94cc6a" (UID: "37d803e4-987c-47c4-b9d6-de67dc94cc6a"). InnerVolumeSpecName "kube-api-access-6bpj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.005896 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-c9b9j" Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.005971 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-c9b9j" event={"ID":"1688b2e2-1aaf-49e0-8414-0f12bb079aba","Type":"ContainerDied","Data":"a1ce457678cf30e76262305beb9d0a55f4f947d960f49e5ae53697abdbf26055"} Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.006015 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1ce457678cf30e76262305beb9d0a55f4f947d960f49e5ae53697abdbf26055" Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.007772 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2d99d0c-9a87-4d80-8105-5c86158f6770","Type":"ContainerStarted","Data":"26dd67cef0700c1f97a6ae12163f2280e25f790cb69a3e7e22558e232cd6bd01"} Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.008030 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e2d99d0c-9a87-4d80-8105-5c86158f6770" containerName="ceilometer-central-agent" containerID="cri-o://5285b11c7f63ea45c0b337d406c4345d1d6dd50a696f0fc66b1291c91ecf9739" gracePeriod=30 Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.008122 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.008534 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e2d99d0c-9a87-4d80-8105-5c86158f6770" containerName="proxy-httpd" containerID="cri-o://26dd67cef0700c1f97a6ae12163f2280e25f790cb69a3e7e22558e232cd6bd01" gracePeriod=30 Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.008595 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e2d99d0c-9a87-4d80-8105-5c86158f6770" containerName="sg-core" 
containerID="cri-o://5c6b8b923b2e0f76fe1b4cbbe0d395ecdb86b62f7a043c552751f388c76968e3" gracePeriod=30 Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.008637 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e2d99d0c-9a87-4d80-8105-5c86158f6770" containerName="ceilometer-notification-agent" containerID="cri-o://df0a00a59040905d57860047e9263d3015a68c94ca415d4eb5741d25b71aefc0" gracePeriod=30 Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.015352 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537574-x6j2n" event={"ID":"5d33afd2-3959-4f00-8c82-1b46cb382721","Type":"ContainerDied","Data":"829c238fa2adaf50d8bdaaa8c41ceccd0c75f872cdf884ec58863ca241e6eee5"} Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.015390 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="829c238fa2adaf50d8bdaaa8c41ceccd0c75f872cdf884ec58863ca241e6eee5" Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.015442 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537574-x6j2n" Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.039653 5014 scope.go:117] "RemoveContainer" containerID="b33503c01f217bb8814daaed37a99363d79c788b917005ec40468867d5075f2a" Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.044943 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.224986153 podStartE2EDuration="1m0.044922467s" podCreationTimestamp="2026-02-28 04:53:10 +0000 UTC" firstStartedPulling="2026-02-28 04:53:12.88092064 +0000 UTC m=+1181.551046540" lastFinishedPulling="2026-02-28 04:54:09.700856944 +0000 UTC m=+1238.370982854" observedRunningTime="2026-02-28 04:54:10.036272025 +0000 UTC m=+1238.706397935" watchObservedRunningTime="2026-02-28 04:54:10.044922467 +0000 UTC m=+1238.715048387" Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.067711 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1688b2e2-1aaf-49e0-8414-0f12bb079aba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1688b2e2-1aaf-49e0-8414-0f12bb079aba" (UID: "1688b2e2-1aaf-49e0-8414-0f12bb079aba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.075371 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37d803e4-987c-47c4-b9d6-de67dc94cc6a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "37d803e4-987c-47c4-b9d6-de67dc94cc6a" (UID: "37d803e4-987c-47c4-b9d6-de67dc94cc6a"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.083441 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37d803e4-987c-47c4-b9d6-de67dc94cc6a-config" (OuterVolumeSpecName: "config") pod "37d803e4-987c-47c4-b9d6-de67dc94cc6a" (UID: "37d803e4-987c-47c4-b9d6-de67dc94cc6a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.084152 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37d803e4-987c-47c4-b9d6-de67dc94cc6a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "37d803e4-987c-47c4-b9d6-de67dc94cc6a" (UID: "37d803e4-987c-47c4-b9d6-de67dc94cc6a"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.092998 5014 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1688b2e2-1aaf-49e0-8414-0f12bb079aba-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.093021 5014 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/1688b2e2-1aaf-49e0-8414-0f12bb079aba-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.093030 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37d803e4-987c-47c4-b9d6-de67dc94cc6a-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.093037 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1688b2e2-1aaf-49e0-8414-0f12bb079aba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:10 
crc kubenswrapper[5014]: I0228 04:54:10.093045 5014 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/37d803e4-987c-47c4-b9d6-de67dc94cc6a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.093055 5014 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1688b2e2-1aaf-49e0-8414-0f12bb079aba-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.093064 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgsxc\" (UniqueName: \"kubernetes.io/projected/5d33afd2-3959-4f00-8c82-1b46cb382721-kube-api-access-tgsxc\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.093074 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bpj4\" (UniqueName: \"kubernetes.io/projected/37d803e4-987c-47c4-b9d6-de67dc94cc6a-kube-api-access-6bpj4\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.093081 5014 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/37d803e4-987c-47c4-b9d6-de67dc94cc6a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.093090 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cdd5\" (UniqueName: \"kubernetes.io/projected/1688b2e2-1aaf-49e0-8414-0f12bb079aba-kube-api-access-4cdd5\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.104402 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37d803e4-987c-47c4-b9d6-de67dc94cc6a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "37d803e4-987c-47c4-b9d6-de67dc94cc6a" (UID: "37d803e4-987c-47c4-b9d6-de67dc94cc6a"). 
InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.108835 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37d803e4-987c-47c4-b9d6-de67dc94cc6a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "37d803e4-987c-47c4-b9d6-de67dc94cc6a" (UID: "37d803e4-987c-47c4-b9d6-de67dc94cc6a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.111347 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1688b2e2-1aaf-49e0-8414-0f12bb079aba-config-data" (OuterVolumeSpecName: "config-data") pod "1688b2e2-1aaf-49e0-8414-0f12bb079aba" (UID: "1688b2e2-1aaf-49e0-8414-0f12bb079aba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.149266 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-76c688b599-br8wc"] Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.161730 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-85ff55b8dd-q46np"] Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.168087 5014 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.194236 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1688b2e2-1aaf-49e0-8414-0f12bb079aba-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.194261 5014 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/37d803e4-987c-47c4-b9d6-de67dc94cc6a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 
04:54:10.194274 5014 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/37d803e4-987c-47c4-b9d6-de67dc94cc6a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.195671 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5b8b564f66-hrmxb"] Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.329567 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-zgd7q"] Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.339979 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-zgd7q"] Feb 28 04:54:10 crc kubenswrapper[5014]: W0228 04:54:10.345439 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd8db062_b379_402e_a83b_291ee7e55bf1.slice/crio-965db12901f0fd64467cced06e96c1ed6878a075b34da9bc1be158c54ea62688 WatchSource:0}: Error finding container 965db12901f0fd64467cced06e96c1ed6878a075b34da9bc1be158c54ea62688: Status 404 returned error can't find the container with id 965db12901f0fd64467cced06e96c1ed6878a075b34da9bc1be158c54ea62688 Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.347091 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-7dd8f4645d-ckwth"] Feb 28 04:54:10 crc kubenswrapper[5014]: W0228 04:54:10.450305 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6f7daae1_d4e4_4396_b788_d49aef714ae4.slice/crio-87a724922dbe478223e7322587f0c91798bf8e1eafe1ab244fda47173cc4695a WatchSource:0}: Error finding container 87a724922dbe478223e7322587f0c91798bf8e1eafe1ab244fda47173cc4695a: Status 404 returned error can't find the container with id 87a724922dbe478223e7322587f0c91798bf8e1eafe1ab244fda47173cc4695a Feb 28 04:54:10 crc 
kubenswrapper[5014]: I0228 04:54:10.453904 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-rkjmm"] Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.950224 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537568-f9lhv"] Feb 28 04:54:10 crc kubenswrapper[5014]: I0228 04:54:10.958844 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537568-f9lhv"] Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.041489 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-76c688b599-br8wc" event={"ID":"45715325-beb1-4639-bb3c-d466fc6e85ce","Type":"ContainerStarted","Data":"6a97f1ed2c69be68dd555764140ef0eac66ba92df99beb4e1a3ed0099e7ec723"} Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.044762 5014 generic.go:334] "Generic (PLEG): container finished" podID="e2d99d0c-9a87-4d80-8105-5c86158f6770" containerID="5c6b8b923b2e0f76fe1b4cbbe0d395ecdb86b62f7a043c552751f388c76968e3" exitCode=2 Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.044784 5014 generic.go:334] "Generic (PLEG): container finished" podID="e2d99d0c-9a87-4d80-8105-5c86158f6770" containerID="5285b11c7f63ea45c0b337d406c4345d1d6dd50a696f0fc66b1291c91ecf9739" exitCode=0 Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.044832 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2d99d0c-9a87-4d80-8105-5c86158f6770","Type":"ContainerDied","Data":"5c6b8b923b2e0f76fe1b4cbbe0d395ecdb86b62f7a043c552751f388c76968e3"} Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.044856 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2d99d0c-9a87-4d80-8105-5c86158f6770","Type":"ContainerDied","Data":"5285b11c7f63ea45c0b337d406c4345d1d6dd50a696f0fc66b1291c91ecf9739"} Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.047448 5014 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85ff55b8dd-q46np" event={"ID":"0c857c36-d78c-484b-a0b1-1cabf11c32a3","Type":"ContainerStarted","Data":"b6a5e795cb35476b5fca604f9d7e228661a01283324e5b713092b7c5f0869514"} Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.053048 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-85ff55b8dd-q46np" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.053103 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85ff55b8dd-q46np" event={"ID":"0c857c36-d78c-484b-a0b1-1cabf11c32a3","Type":"ContainerStarted","Data":"5c1806eef3771746a9074900d3c60405d6a491080e0b1ff59b32f37cb1cffe9f"} Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.053127 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-85ff55b8dd-q46np" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.053141 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85ff55b8dd-q46np" event={"ID":"0c857c36-d78c-484b-a0b1-1cabf11c32a3","Type":"ContainerStarted","Data":"19e11183b4c5bfd90e1eaadd95d80d11bb931abdf771498f12b2eeca256c65b6"} Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.053155 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b8b564f66-hrmxb" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.053168 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b8b564f66-hrmxb" event={"ID":"e048509e-80c3-4102-b7a6-bb3d30f06ec1","Type":"ContainerStarted","Data":"37b2d115396c534df38db3022865a781542af116d7d950a4e5cafe5ae0a18697"} Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.053182 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b8b564f66-hrmxb" 
event={"ID":"e048509e-80c3-4102-b7a6-bb3d30f06ec1","Type":"ContainerStarted","Data":"aceabd4dc758e5ad5002c03cd24c4054af41007750634d96c9cab1edb321d655"} Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.053193 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b8b564f66-hrmxb" event={"ID":"e048509e-80c3-4102-b7a6-bb3d30f06ec1","Type":"ContainerStarted","Data":"5bcbc0efdd1ffe7725a6cbbc425df9c68e982a8aecd9b046c244e2c1f9e2530a"} Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.053205 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5b8b564f66-hrmxb" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.056306 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7dd8f4645d-ckwth" event={"ID":"bd8db062-b379-402e-a83b-291ee7e55bf1","Type":"ContainerStarted","Data":"965db12901f0fd64467cced06e96c1ed6878a075b34da9bc1be158c54ea62688"} Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.058301 5014 generic.go:334] "Generic (PLEG): container finished" podID="6f7daae1-d4e4-4396-b788-d49aef714ae4" containerID="521b76fd85643e2ac516d69737ac55e4708c689c8de195b8492ab55d7f033fe1" exitCode=0 Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.058362 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-rkjmm" event={"ID":"6f7daae1-d4e4-4396-b788-d49aef714ae4","Type":"ContainerDied","Data":"521b76fd85643e2ac516d69737ac55e4708c689c8de195b8492ab55d7f033fe1"} Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.058397 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-rkjmm" event={"ID":"6f7daae1-d4e4-4396-b788-d49aef714ae4","Type":"ContainerStarted","Data":"87a724922dbe478223e7322587f0c91798bf8e1eafe1ab244fda47173cc4695a"} Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.076033 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/barbican-api-85ff55b8dd-q46np" podStartSLOduration=5.076011896 podStartE2EDuration="5.076011896s" podCreationTimestamp="2026-02-28 04:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:54:11.068242978 +0000 UTC m=+1239.738368888" watchObservedRunningTime="2026-02-28 04:54:11.076011896 +0000 UTC m=+1239.746137806" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.115672 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5b8b564f66-hrmxb" podStartSLOduration=8.11565338 podStartE2EDuration="8.11565338s" podCreationTimestamp="2026-02-28 04:54:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:54:11.114354916 +0000 UTC m=+1239.784480836" watchObservedRunningTime="2026-02-28 04:54:11.11565338 +0000 UTC m=+1239.785779290" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.189355 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 28 04:54:11 crc kubenswrapper[5014]: E0228 04:54:11.189699 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37d803e4-987c-47c4-b9d6-de67dc94cc6a" containerName="init" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.189710 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="37d803e4-987c-47c4-b9d6-de67dc94cc6a" containerName="init" Feb 28 04:54:11 crc kubenswrapper[5014]: E0228 04:54:11.189722 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1688b2e2-1aaf-49e0-8414-0f12bb079aba" containerName="cinder-db-sync" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.189728 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="1688b2e2-1aaf-49e0-8414-0f12bb079aba" containerName="cinder-db-sync" Feb 28 04:54:11 crc kubenswrapper[5014]: E0228 04:54:11.189737 5014 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37d803e4-987c-47c4-b9d6-de67dc94cc6a" containerName="dnsmasq-dns" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.189743 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="37d803e4-987c-47c4-b9d6-de67dc94cc6a" containerName="dnsmasq-dns" Feb 28 04:54:11 crc kubenswrapper[5014]: E0228 04:54:11.189774 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d33afd2-3959-4f00-8c82-1b46cb382721" containerName="oc" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.189780 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d33afd2-3959-4f00-8c82-1b46cb382721" containerName="oc" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.189964 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="1688b2e2-1aaf-49e0-8414-0f12bb079aba" containerName="cinder-db-sync" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.189982 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="37d803e4-987c-47c4-b9d6-de67dc94cc6a" containerName="dnsmasq-dns" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.189998 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d33afd2-3959-4f00-8c82-1b46cb382721" containerName="oc" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.190841 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.195296 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.199460 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-ck89z" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.199508 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.199850 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.203051 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.217992 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc65cd4a-e488-4a36-a861-18fddd2b9c6b-config-data\") pod \"cinder-scheduler-0\" (UID: \"bc65cd4a-e488-4a36-a861-18fddd2b9c6b\") " pod="openstack/cinder-scheduler-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.218062 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njd49\" (UniqueName: \"kubernetes.io/projected/bc65cd4a-e488-4a36-a861-18fddd2b9c6b-kube-api-access-njd49\") pod \"cinder-scheduler-0\" (UID: \"bc65cd4a-e488-4a36-a861-18fddd2b9c6b\") " pod="openstack/cinder-scheduler-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.218121 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bc65cd4a-e488-4a36-a861-18fddd2b9c6b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"bc65cd4a-e488-4a36-a861-18fddd2b9c6b\") " pod="openstack/cinder-scheduler-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.218162 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bc65cd4a-e488-4a36-a861-18fddd2b9c6b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"bc65cd4a-e488-4a36-a861-18fddd2b9c6b\") " pod="openstack/cinder-scheduler-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.218315 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc65cd4a-e488-4a36-a861-18fddd2b9c6b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"bc65cd4a-e488-4a36-a861-18fddd2b9c6b\") " pod="openstack/cinder-scheduler-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.218365 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc65cd4a-e488-4a36-a861-18fddd2b9c6b-scripts\") pod \"cinder-scheduler-0\" (UID: \"bc65cd4a-e488-4a36-a861-18fddd2b9c6b\") " pod="openstack/cinder-scheduler-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.276441 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-rkjmm"] Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.288266 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-zpvkp"] Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.289763 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-zpvkp" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.298661 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-zpvkp"] Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.322957 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc65cd4a-e488-4a36-a861-18fddd2b9c6b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"bc65cd4a-e488-4a36-a861-18fddd2b9c6b\") " pod="openstack/cinder-scheduler-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.322997 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc65cd4a-e488-4a36-a861-18fddd2b9c6b-scripts\") pod \"cinder-scheduler-0\" (UID: \"bc65cd4a-e488-4a36-a861-18fddd2b9c6b\") " pod="openstack/cinder-scheduler-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.323082 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc65cd4a-e488-4a36-a861-18fddd2b9c6b-config-data\") pod \"cinder-scheduler-0\" (UID: \"bc65cd4a-e488-4a36-a861-18fddd2b9c6b\") " pod="openstack/cinder-scheduler-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.323104 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njd49\" (UniqueName: \"kubernetes.io/projected/bc65cd4a-e488-4a36-a861-18fddd2b9c6b-kube-api-access-njd49\") pod \"cinder-scheduler-0\" (UID: \"bc65cd4a-e488-4a36-a861-18fddd2b9c6b\") " pod="openstack/cinder-scheduler-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.323134 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bc65cd4a-e488-4a36-a861-18fddd2b9c6b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"bc65cd4a-e488-4a36-a861-18fddd2b9c6b\") " pod="openstack/cinder-scheduler-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.323158 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bc65cd4a-e488-4a36-a861-18fddd2b9c6b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"bc65cd4a-e488-4a36-a861-18fddd2b9c6b\") " pod="openstack/cinder-scheduler-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.324193 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bc65cd4a-e488-4a36-a861-18fddd2b9c6b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"bc65cd4a-e488-4a36-a861-18fddd2b9c6b\") " pod="openstack/cinder-scheduler-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.326766 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc65cd4a-e488-4a36-a861-18fddd2b9c6b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"bc65cd4a-e488-4a36-a861-18fddd2b9c6b\") " pod="openstack/cinder-scheduler-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.338012 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc65cd4a-e488-4a36-a861-18fddd2b9c6b-config-data\") pod \"cinder-scheduler-0\" (UID: \"bc65cd4a-e488-4a36-a861-18fddd2b9c6b\") " pod="openstack/cinder-scheduler-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.341302 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bc65cd4a-e488-4a36-a861-18fddd2b9c6b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"bc65cd4a-e488-4a36-a861-18fddd2b9c6b\") " pod="openstack/cinder-scheduler-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.345251 5014 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-njd49\" (UniqueName: \"kubernetes.io/projected/bc65cd4a-e488-4a36-a861-18fddd2b9c6b-kube-api-access-njd49\") pod \"cinder-scheduler-0\" (UID: \"bc65cd4a-e488-4a36-a861-18fddd2b9c6b\") " pod="openstack/cinder-scheduler-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.345858 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc65cd4a-e488-4a36-a861-18fddd2b9c6b-scripts\") pod \"cinder-scheduler-0\" (UID: \"bc65cd4a-e488-4a36-a861-18fddd2b9c6b\") " pod="openstack/cinder-scheduler-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.426883 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3240ff52-33fc-4027-a9ea-f3e17780b320-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-zpvkp\" (UID: \"3240ff52-33fc-4027-a9ea-f3e17780b320\") " pod="openstack/dnsmasq-dns-5c9776ccc5-zpvkp" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.427315 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3240ff52-33fc-4027-a9ea-f3e17780b320-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-zpvkp\" (UID: \"3240ff52-33fc-4027-a9ea-f3e17780b320\") " pod="openstack/dnsmasq-dns-5c9776ccc5-zpvkp" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.427437 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58chh\" (UniqueName: \"kubernetes.io/projected/3240ff52-33fc-4027-a9ea-f3e17780b320-kube-api-access-58chh\") pod \"dnsmasq-dns-5c9776ccc5-zpvkp\" (UID: \"3240ff52-33fc-4027-a9ea-f3e17780b320\") " pod="openstack/dnsmasq-dns-5c9776ccc5-zpvkp" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.427526 5014 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3240ff52-33fc-4027-a9ea-f3e17780b320-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-zpvkp\" (UID: \"3240ff52-33fc-4027-a9ea-f3e17780b320\") " pod="openstack/dnsmasq-dns-5c9776ccc5-zpvkp" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.427611 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3240ff52-33fc-4027-a9ea-f3e17780b320-config\") pod \"dnsmasq-dns-5c9776ccc5-zpvkp\" (UID: \"3240ff52-33fc-4027-a9ea-f3e17780b320\") " pod="openstack/dnsmasq-dns-5c9776ccc5-zpvkp" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.427661 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3240ff52-33fc-4027-a9ea-f3e17780b320-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-zpvkp\" (UID: \"3240ff52-33fc-4027-a9ea-f3e17780b320\") " pod="openstack/dnsmasq-dns-5c9776ccc5-zpvkp" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.442070 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.443717 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.445640 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.463519 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.530118 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58chh\" (UniqueName: \"kubernetes.io/projected/3240ff52-33fc-4027-a9ea-f3e17780b320-kube-api-access-58chh\") pod \"dnsmasq-dns-5c9776ccc5-zpvkp\" (UID: \"3240ff52-33fc-4027-a9ea-f3e17780b320\") " pod="openstack/dnsmasq-dns-5c9776ccc5-zpvkp" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.530219 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3240ff52-33fc-4027-a9ea-f3e17780b320-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-zpvkp\" (UID: \"3240ff52-33fc-4027-a9ea-f3e17780b320\") " pod="openstack/dnsmasq-dns-5c9776ccc5-zpvkp" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.530269 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3240ff52-33fc-4027-a9ea-f3e17780b320-config\") pod \"dnsmasq-dns-5c9776ccc5-zpvkp\" (UID: \"3240ff52-33fc-4027-a9ea-f3e17780b320\") " pod="openstack/dnsmasq-dns-5c9776ccc5-zpvkp" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.530338 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3240ff52-33fc-4027-a9ea-f3e17780b320-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-zpvkp\" (UID: \"3240ff52-33fc-4027-a9ea-f3e17780b320\") " pod="openstack/dnsmasq-dns-5c9776ccc5-zpvkp" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.530403 5014 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3240ff52-33fc-4027-a9ea-f3e17780b320-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-zpvkp\" (UID: \"3240ff52-33fc-4027-a9ea-f3e17780b320\") " pod="openstack/dnsmasq-dns-5c9776ccc5-zpvkp" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.530446 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3240ff52-33fc-4027-a9ea-f3e17780b320-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-zpvkp\" (UID: \"3240ff52-33fc-4027-a9ea-f3e17780b320\") " pod="openstack/dnsmasq-dns-5c9776ccc5-zpvkp" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.531434 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3240ff52-33fc-4027-a9ea-f3e17780b320-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-zpvkp\" (UID: \"3240ff52-33fc-4027-a9ea-f3e17780b320\") " pod="openstack/dnsmasq-dns-5c9776ccc5-zpvkp" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.532883 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3240ff52-33fc-4027-a9ea-f3e17780b320-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-zpvkp\" (UID: \"3240ff52-33fc-4027-a9ea-f3e17780b320\") " pod="openstack/dnsmasq-dns-5c9776ccc5-zpvkp" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.533110 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3240ff52-33fc-4027-a9ea-f3e17780b320-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-zpvkp\" (UID: \"3240ff52-33fc-4027-a9ea-f3e17780b320\") " pod="openstack/dnsmasq-dns-5c9776ccc5-zpvkp" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.534191 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3240ff52-33fc-4027-a9ea-f3e17780b320-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-zpvkp\" (UID: \"3240ff52-33fc-4027-a9ea-f3e17780b320\") " pod="openstack/dnsmasq-dns-5c9776ccc5-zpvkp" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.535275 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3240ff52-33fc-4027-a9ea-f3e17780b320-config\") pod \"dnsmasq-dns-5c9776ccc5-zpvkp\" (UID: \"3240ff52-33fc-4027-a9ea-f3e17780b320\") " pod="openstack/dnsmasq-dns-5c9776ccc5-zpvkp" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.549136 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58chh\" (UniqueName: \"kubernetes.io/projected/3240ff52-33fc-4027-a9ea-f3e17780b320-kube-api-access-58chh\") pod \"dnsmasq-dns-5c9776ccc5-zpvkp\" (UID: \"3240ff52-33fc-4027-a9ea-f3e17780b320\") " pod="openstack/dnsmasq-dns-5c9776ccc5-zpvkp" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.610322 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.635967 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-zpvkp" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.636572 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9\") " pod="openstack/cinder-api-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.636626 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9\") " pod="openstack/cinder-api-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.636656 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czdm8\" (UniqueName: \"kubernetes.io/projected/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-kube-api-access-czdm8\") pod \"cinder-api-0\" (UID: \"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9\") " pod="openstack/cinder-api-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.636691 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-logs\") pod \"cinder-api-0\" (UID: \"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9\") " pod="openstack/cinder-api-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.636735 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-scripts\") pod \"cinder-api-0\" (UID: \"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9\") " pod="openstack/cinder-api-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.636976 5014 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-config-data-custom\") pod \"cinder-api-0\" (UID: \"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9\") " pod="openstack/cinder-api-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.637013 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-config-data\") pod \"cinder-api-0\" (UID: \"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9\") " pod="openstack/cinder-api-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.738751 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-config-data-custom\") pod \"cinder-api-0\" (UID: \"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9\") " pod="openstack/cinder-api-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.738828 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-config-data\") pod \"cinder-api-0\" (UID: \"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9\") " pod="openstack/cinder-api-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.738877 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9\") " pod="openstack/cinder-api-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.738907 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9\") " pod="openstack/cinder-api-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.738943 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czdm8\" (UniqueName: \"kubernetes.io/projected/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-kube-api-access-czdm8\") pod \"cinder-api-0\" (UID: \"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9\") " pod="openstack/cinder-api-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.738977 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-logs\") pod \"cinder-api-0\" (UID: \"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9\") " pod="openstack/cinder-api-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.739029 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-scripts\") pod \"cinder-api-0\" (UID: \"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9\") " pod="openstack/cinder-api-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.739350 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-etc-machine-id\") pod \"cinder-api-0\" (UID: \"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9\") " pod="openstack/cinder-api-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.739693 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-logs\") pod \"cinder-api-0\" (UID: \"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9\") " pod="openstack/cinder-api-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.742314 5014 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-scripts\") pod \"cinder-api-0\" (UID: \"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9\") " pod="openstack/cinder-api-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.742764 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-config-data\") pod \"cinder-api-0\" (UID: \"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9\") " pod="openstack/cinder-api-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.743118 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-config-data-custom\") pod \"cinder-api-0\" (UID: \"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9\") " pod="openstack/cinder-api-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.757422 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czdm8\" (UniqueName: \"kubernetes.io/projected/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-kube-api-access-czdm8\") pod \"cinder-api-0\" (UID: \"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9\") " pod="openstack/cinder-api-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.758334 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9\") " pod="openstack/cinder-api-0" Feb 28 04:54:11 crc kubenswrapper[5014]: I0228 04:54:11.770383 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 28 04:54:12 crc kubenswrapper[5014]: I0228 04:54:12.072780 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-rkjmm" event={"ID":"6f7daae1-d4e4-4396-b788-d49aef714ae4","Type":"ContainerStarted","Data":"7d51a64afde5eb3f19c1d35ac8b736c4808ff03283fc024a6b9a2bac8178b7f1"} Feb 28 04:54:12 crc kubenswrapper[5014]: I0228 04:54:12.073136 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-rkjmm" podUID="6f7daae1-d4e4-4396-b788-d49aef714ae4" containerName="dnsmasq-dns" containerID="cri-o://7d51a64afde5eb3f19c1d35ac8b736c4808ff03283fc024a6b9a2bac8178b7f1" gracePeriod=10 Feb 28 04:54:12 crc kubenswrapper[5014]: I0228 04:54:12.187239 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="337102fc-d918-4401-a98b-0903531566b9" path="/var/lib/kubelet/pods/337102fc-d918-4401-a98b-0903531566b9/volumes" Feb 28 04:54:12 crc kubenswrapper[5014]: I0228 04:54:12.188403 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37d803e4-987c-47c4-b9d6-de67dc94cc6a" path="/var/lib/kubelet/pods/37d803e4-987c-47c4-b9d6-de67dc94cc6a/volumes" Feb 28 04:54:12 crc kubenswrapper[5014]: I0228 04:54:12.222175 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-rkjmm" podStartSLOduration=9.222156584 podStartE2EDuration="9.222156584s" podCreationTimestamp="2026-02-28 04:54:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:54:12.108494134 +0000 UTC m=+1240.778620074" watchObservedRunningTime="2026-02-28 04:54:12.222156584 +0000 UTC m=+1240.892282494" Feb 28 04:54:12 crc kubenswrapper[5014]: I0228 04:54:12.793756 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-rkjmm" Feb 28 04:54:12 crc kubenswrapper[5014]: I0228 04:54:12.877834 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-zpvkp"] Feb 28 04:54:12 crc kubenswrapper[5014]: W0228 04:54:12.878846 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3240ff52_33fc_4027_a9ea_f3e17780b320.slice/crio-3f89b8eb178965465502cc97c18d675da85aad21aab3a5856a9565eabc9162cc WatchSource:0}: Error finding container 3f89b8eb178965465502cc97c18d675da85aad21aab3a5856a9565eabc9162cc: Status 404 returned error can't find the container with id 3f89b8eb178965465502cc97c18d675da85aad21aab3a5856a9565eabc9162cc Feb 28 04:54:12 crc kubenswrapper[5014]: I0228 04:54:12.969524 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f7daae1-d4e4-4396-b788-d49aef714ae4-dns-swift-storage-0\") pod \"6f7daae1-d4e4-4396-b788-d49aef714ae4\" (UID: \"6f7daae1-d4e4-4396-b788-d49aef714ae4\") " Feb 28 04:54:12 crc kubenswrapper[5014]: I0228 04:54:12.969724 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4w6cg\" (UniqueName: \"kubernetes.io/projected/6f7daae1-d4e4-4396-b788-d49aef714ae4-kube-api-access-4w6cg\") pod \"6f7daae1-d4e4-4396-b788-d49aef714ae4\" (UID: \"6f7daae1-d4e4-4396-b788-d49aef714ae4\") " Feb 28 04:54:12 crc kubenswrapper[5014]: I0228 04:54:12.969947 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f7daae1-d4e4-4396-b788-d49aef714ae4-config\") pod \"6f7daae1-d4e4-4396-b788-d49aef714ae4\" (UID: \"6f7daae1-d4e4-4396-b788-d49aef714ae4\") " Feb 28 04:54:12 crc kubenswrapper[5014]: I0228 04:54:12.970037 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f7daae1-d4e4-4396-b788-d49aef714ae4-dns-svc\") pod \"6f7daae1-d4e4-4396-b788-d49aef714ae4\" (UID: \"6f7daae1-d4e4-4396-b788-d49aef714ae4\") " Feb 28 04:54:12 crc kubenswrapper[5014]: I0228 04:54:12.970112 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f7daae1-d4e4-4396-b788-d49aef714ae4-ovsdbserver-nb\") pod \"6f7daae1-d4e4-4396-b788-d49aef714ae4\" (UID: \"6f7daae1-d4e4-4396-b788-d49aef714ae4\") " Feb 28 04:54:12 crc kubenswrapper[5014]: I0228 04:54:12.970266 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6f7daae1-d4e4-4396-b788-d49aef714ae4-ovsdbserver-sb\") pod \"6f7daae1-d4e4-4396-b788-d49aef714ae4\" (UID: \"6f7daae1-d4e4-4396-b788-d49aef714ae4\") " Feb 28 04:54:12 crc kubenswrapper[5014]: I0228 04:54:12.979155 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f7daae1-d4e4-4396-b788-d49aef714ae4-kube-api-access-4w6cg" (OuterVolumeSpecName: "kube-api-access-4w6cg") pod "6f7daae1-d4e4-4396-b788-d49aef714ae4" (UID: "6f7daae1-d4e4-4396-b788-d49aef714ae4"). InnerVolumeSpecName "kube-api-access-4w6cg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:54:13 crc kubenswrapper[5014]: I0228 04:54:13.031021 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f7daae1-d4e4-4396-b788-d49aef714ae4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6f7daae1-d4e4-4396-b788-d49aef714ae4" (UID: "6f7daae1-d4e4-4396-b788-d49aef714ae4"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:54:13 crc kubenswrapper[5014]: I0228 04:54:13.037012 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f7daae1-d4e4-4396-b788-d49aef714ae4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6f7daae1-d4e4-4396-b788-d49aef714ae4" (UID: "6f7daae1-d4e4-4396-b788-d49aef714ae4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:54:13 crc kubenswrapper[5014]: I0228 04:54:13.050802 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f7daae1-d4e4-4396-b788-d49aef714ae4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6f7daae1-d4e4-4396-b788-d49aef714ae4" (UID: "6f7daae1-d4e4-4396-b788-d49aef714ae4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:54:13 crc kubenswrapper[5014]: W0228 04:54:13.053173 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc65cd4a_e488_4a36_a861_18fddd2b9c6b.slice/crio-37e4118db6e3bf3cd19b9621d7c09b6ceac880705a8222b4e589cfd9e88d2ece WatchSource:0}: Error finding container 37e4118db6e3bf3cd19b9621d7c09b6ceac880705a8222b4e589cfd9e88d2ece: Status 404 returned error can't find the container with id 37e4118db6e3bf3cd19b9621d7c09b6ceac880705a8222b4e589cfd9e88d2ece Feb 28 04:54:13 crc kubenswrapper[5014]: W0228 04:54:13.057948 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod783dba5a_7aad_4820_82e2_6ecd3ff6d1d9.slice/crio-05881e5ef24f9d88169544b03eb9b9c054921340b5788d5cacf4cdb7fd688ffb WatchSource:0}: Error finding container 05881e5ef24f9d88169544b03eb9b9c054921340b5788d5cacf4cdb7fd688ffb: Status 404 returned error can't find the container with id 
05881e5ef24f9d88169544b03eb9b9c054921340b5788d5cacf4cdb7fd688ffb Feb 28 04:54:13 crc kubenswrapper[5014]: I0228 04:54:13.058152 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f7daae1-d4e4-4396-b788-d49aef714ae4-config" (OuterVolumeSpecName: "config") pod "6f7daae1-d4e4-4396-b788-d49aef714ae4" (UID: "6f7daae1-d4e4-4396-b788-d49aef714ae4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:54:13 crc kubenswrapper[5014]: I0228 04:54:13.061317 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f7daae1-d4e4-4396-b788-d49aef714ae4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6f7daae1-d4e4-4396-b788-d49aef714ae4" (UID: "6f7daae1-d4e4-4396-b788-d49aef714ae4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:54:13 crc kubenswrapper[5014]: I0228 04:54:13.064101 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 28 04:54:13 crc kubenswrapper[5014]: I0228 04:54:13.072396 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f7daae1-d4e4-4396-b788-d49aef714ae4-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:13 crc kubenswrapper[5014]: I0228 04:54:13.072414 5014 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6f7daae1-d4e4-4396-b788-d49aef714ae4-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:13 crc kubenswrapper[5014]: I0228 04:54:13.072425 5014 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6f7daae1-d4e4-4396-b788-d49aef714ae4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:13 crc kubenswrapper[5014]: I0228 04:54:13.072456 5014 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/6f7daae1-d4e4-4396-b788-d49aef714ae4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:13 crc kubenswrapper[5014]: I0228 04:54:13.072490 5014 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6f7daae1-d4e4-4396-b788-d49aef714ae4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:13 crc kubenswrapper[5014]: I0228 04:54:13.072499 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4w6cg\" (UniqueName: \"kubernetes.io/projected/6f7daae1-d4e4-4396-b788-d49aef714ae4-kube-api-access-4w6cg\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:13 crc kubenswrapper[5014]: I0228 04:54:13.076020 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 28 04:54:13 crc kubenswrapper[5014]: I0228 04:54:13.082022 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bc65cd4a-e488-4a36-a861-18fddd2b9c6b","Type":"ContainerStarted","Data":"37e4118db6e3bf3cd19b9621d7c09b6ceac880705a8222b4e589cfd9e88d2ece"} Feb 28 04:54:13 crc kubenswrapper[5014]: I0228 04:54:13.083572 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9","Type":"ContainerStarted","Data":"05881e5ef24f9d88169544b03eb9b9c054921340b5788d5cacf4cdb7fd688ffb"} Feb 28 04:54:13 crc kubenswrapper[5014]: I0228 04:54:13.085193 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-zpvkp" event={"ID":"3240ff52-33fc-4027-a9ea-f3e17780b320","Type":"ContainerStarted","Data":"3f89b8eb178965465502cc97c18d675da85aad21aab3a5856a9565eabc9162cc"} Feb 28 04:54:13 crc kubenswrapper[5014]: I0228 04:54:13.086926 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7dd8f4645d-ckwth" 
event={"ID":"bd8db062-b379-402e-a83b-291ee7e55bf1","Type":"ContainerStarted","Data":"a95b4d9c41b0854ef3d013b8da01d8ffbbdba22f392746bb2e54143c641dc805"} Feb 28 04:54:13 crc kubenswrapper[5014]: I0228 04:54:13.086976 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-7dd8f4645d-ckwth" event={"ID":"bd8db062-b379-402e-a83b-291ee7e55bf1","Type":"ContainerStarted","Data":"5c2ad6845acaf424792fad85604abefceaf45e624d42b17dbced40a17b56795a"} Feb 28 04:54:13 crc kubenswrapper[5014]: I0228 04:54:13.089697 5014 generic.go:334] "Generic (PLEG): container finished" podID="6f7daae1-d4e4-4396-b788-d49aef714ae4" containerID="7d51a64afde5eb3f19c1d35ac8b736c4808ff03283fc024a6b9a2bac8178b7f1" exitCode=0 Feb 28 04:54:13 crc kubenswrapper[5014]: I0228 04:54:13.089738 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-rkjmm" Feb 28 04:54:13 crc kubenswrapper[5014]: I0228 04:54:13.089788 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-rkjmm" event={"ID":"6f7daae1-d4e4-4396-b788-d49aef714ae4","Type":"ContainerDied","Data":"7d51a64afde5eb3f19c1d35ac8b736c4808ff03283fc024a6b9a2bac8178b7f1"} Feb 28 04:54:13 crc kubenswrapper[5014]: I0228 04:54:13.089841 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-rkjmm" event={"ID":"6f7daae1-d4e4-4396-b788-d49aef714ae4","Type":"ContainerDied","Data":"87a724922dbe478223e7322587f0c91798bf8e1eafe1ab244fda47173cc4695a"} Feb 28 04:54:13 crc kubenswrapper[5014]: I0228 04:54:13.089862 5014 scope.go:117] "RemoveContainer" containerID="7d51a64afde5eb3f19c1d35ac8b736c4808ff03283fc024a6b9a2bac8178b7f1" Feb 28 04:54:13 crc kubenswrapper[5014]: I0228 04:54:13.097228 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-76c688b599-br8wc" 
event={"ID":"45715325-beb1-4639-bb3c-d466fc6e85ce","Type":"ContainerStarted","Data":"050cb399c1a556c7d4fad6465e49feac2b0f62f2b87e6341c27ab201830e29d1"} Feb 28 04:54:13 crc kubenswrapper[5014]: I0228 04:54:13.097270 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-76c688b599-br8wc" event={"ID":"45715325-beb1-4639-bb3c-d466fc6e85ce","Type":"ContainerStarted","Data":"e62491a92b17982197cdf20256bfa83adc129e18808a0fc049df2e8d110af429"} Feb 28 04:54:13 crc kubenswrapper[5014]: I0228 04:54:13.108240 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-7dd8f4645d-ckwth" podStartSLOduration=8.208378149 podStartE2EDuration="10.108220782s" podCreationTimestamp="2026-02-28 04:54:03 +0000 UTC" firstStartedPulling="2026-02-28 04:54:10.349336766 +0000 UTC m=+1239.019462676" lastFinishedPulling="2026-02-28 04:54:12.249179399 +0000 UTC m=+1240.919305309" observedRunningTime="2026-02-28 04:54:13.107512992 +0000 UTC m=+1241.777638902" watchObservedRunningTime="2026-02-28 04:54:13.108220782 +0000 UTC m=+1241.778346692" Feb 28 04:54:13 crc kubenswrapper[5014]: I0228 04:54:13.140181 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-76c688b599-br8wc" podStartSLOduration=8.037048041 podStartE2EDuration="10.140163379s" podCreationTimestamp="2026-02-28 04:54:03 +0000 UTC" firstStartedPulling="2026-02-28 04:54:10.167847196 +0000 UTC m=+1238.837973106" lastFinishedPulling="2026-02-28 04:54:12.270962534 +0000 UTC m=+1240.941088444" observedRunningTime="2026-02-28 04:54:13.125108445 +0000 UTC m=+1241.795234355" watchObservedRunningTime="2026-02-28 04:54:13.140163379 +0000 UTC m=+1241.810289289" Feb 28 04:54:13 crc kubenswrapper[5014]: I0228 04:54:13.160938 5014 scope.go:117] "RemoveContainer" containerID="521b76fd85643e2ac516d69737ac55e4708c689c8de195b8492ab55d7f033fe1" Feb 28 04:54:13 crc kubenswrapper[5014]: I0228 04:54:13.166794 5014 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-rkjmm"] Feb 28 04:54:13 crc kubenswrapper[5014]: I0228 04:54:13.181211 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-rkjmm"] Feb 28 04:54:13 crc kubenswrapper[5014]: I0228 04:54:13.217162 5014 scope.go:117] "RemoveContainer" containerID="7d51a64afde5eb3f19c1d35ac8b736c4808ff03283fc024a6b9a2bac8178b7f1" Feb 28 04:54:13 crc kubenswrapper[5014]: E0228 04:54:13.217568 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d51a64afde5eb3f19c1d35ac8b736c4808ff03283fc024a6b9a2bac8178b7f1\": container with ID starting with 7d51a64afde5eb3f19c1d35ac8b736c4808ff03283fc024a6b9a2bac8178b7f1 not found: ID does not exist" containerID="7d51a64afde5eb3f19c1d35ac8b736c4808ff03283fc024a6b9a2bac8178b7f1" Feb 28 04:54:13 crc kubenswrapper[5014]: I0228 04:54:13.217608 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d51a64afde5eb3f19c1d35ac8b736c4808ff03283fc024a6b9a2bac8178b7f1"} err="failed to get container status \"7d51a64afde5eb3f19c1d35ac8b736c4808ff03283fc024a6b9a2bac8178b7f1\": rpc error: code = NotFound desc = could not find container \"7d51a64afde5eb3f19c1d35ac8b736c4808ff03283fc024a6b9a2bac8178b7f1\": container with ID starting with 7d51a64afde5eb3f19c1d35ac8b736c4808ff03283fc024a6b9a2bac8178b7f1 not found: ID does not exist" Feb 28 04:54:13 crc kubenswrapper[5014]: I0228 04:54:13.217643 5014 scope.go:117] "RemoveContainer" containerID="521b76fd85643e2ac516d69737ac55e4708c689c8de195b8492ab55d7f033fe1" Feb 28 04:54:13 crc kubenswrapper[5014]: E0228 04:54:13.218028 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"521b76fd85643e2ac516d69737ac55e4708c689c8de195b8492ab55d7f033fe1\": container with ID starting with 
521b76fd85643e2ac516d69737ac55e4708c689c8de195b8492ab55d7f033fe1 not found: ID does not exist" containerID="521b76fd85643e2ac516d69737ac55e4708c689c8de195b8492ab55d7f033fe1" Feb 28 04:54:13 crc kubenswrapper[5014]: I0228 04:54:13.218055 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"521b76fd85643e2ac516d69737ac55e4708c689c8de195b8492ab55d7f033fe1"} err="failed to get container status \"521b76fd85643e2ac516d69737ac55e4708c689c8de195b8492ab55d7f033fe1\": rpc error: code = NotFound desc = could not find container \"521b76fd85643e2ac516d69737ac55e4708c689c8de195b8492ab55d7f033fe1\": container with ID starting with 521b76fd85643e2ac516d69737ac55e4708c689c8de195b8492ab55d7f033fe1 not found: ID does not exist" Feb 28 04:54:13 crc kubenswrapper[5014]: I0228 04:54:13.218505 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 28 04:54:14 crc kubenswrapper[5014]: I0228 04:54:14.109040 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9","Type":"ContainerStarted","Data":"0de5545cd55257530d95ddb1f6c871c683d032adbb921d3ea257a0c14348e358"} Feb 28 04:54:14 crc kubenswrapper[5014]: I0228 04:54:14.114516 5014 generic.go:334] "Generic (PLEG): container finished" podID="3240ff52-33fc-4027-a9ea-f3e17780b320" containerID="25c781fbb0babf95b6fc112d6b43cc4a543b6dfd3421cdc41890b293fbd0e486" exitCode=0 Feb 28 04:54:14 crc kubenswrapper[5014]: I0228 04:54:14.114867 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-zpvkp" event={"ID":"3240ff52-33fc-4027-a9ea-f3e17780b320","Type":"ContainerDied","Data":"25c781fbb0babf95b6fc112d6b43cc4a543b6dfd3421cdc41890b293fbd0e486"} Feb 28 04:54:14 crc kubenswrapper[5014]: I0228 04:54:14.185833 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f7daae1-d4e4-4396-b788-d49aef714ae4" 
path="/var/lib/kubelet/pods/6f7daae1-d4e4-4396-b788-d49aef714ae4/volumes" Feb 28 04:54:15 crc kubenswrapper[5014]: I0228 04:54:15.143021 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bc65cd4a-e488-4a36-a861-18fddd2b9c6b","Type":"ContainerStarted","Data":"a20e7366a8af755e55533b0c39e29b6920b49f5a2790abacba4d0146be917e6c"} Feb 28 04:54:15 crc kubenswrapper[5014]: I0228 04:54:15.143596 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bc65cd4a-e488-4a36-a861-18fddd2b9c6b","Type":"ContainerStarted","Data":"c333ec1fe46efa9be128b113cedf9fbb11b771efe5408907379b5842279f7fd9"} Feb 28 04:54:15 crc kubenswrapper[5014]: I0228 04:54:15.155491 5014 generic.go:334] "Generic (PLEG): container finished" podID="e2d99d0c-9a87-4d80-8105-5c86158f6770" containerID="df0a00a59040905d57860047e9263d3015a68c94ca415d4eb5741d25b71aefc0" exitCode=0 Feb 28 04:54:15 crc kubenswrapper[5014]: I0228 04:54:15.155575 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2d99d0c-9a87-4d80-8105-5c86158f6770","Type":"ContainerDied","Data":"df0a00a59040905d57860047e9263d3015a68c94ca415d4eb5741d25b71aefc0"} Feb 28 04:54:15 crc kubenswrapper[5014]: I0228 04:54:15.158085 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9","Type":"ContainerStarted","Data":"d4970f7d4123abcb554af9faf6769978c8f70e29bff86875b9ed26688150b2c1"} Feb 28 04:54:15 crc kubenswrapper[5014]: I0228 04:54:15.158204 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="783dba5a-7aad-4820-82e2-6ecd3ff6d1d9" containerName="cinder-api-log" containerID="cri-o://0de5545cd55257530d95ddb1f6c871c683d032adbb921d3ea257a0c14348e358" gracePeriod=30 Feb 28 04:54:15 crc kubenswrapper[5014]: I0228 04:54:15.158288 5014 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/cinder-api-0" Feb 28 04:54:15 crc kubenswrapper[5014]: I0228 04:54:15.158318 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="783dba5a-7aad-4820-82e2-6ecd3ff6d1d9" containerName="cinder-api" containerID="cri-o://d4970f7d4123abcb554af9faf6769978c8f70e29bff86875b9ed26688150b2c1" gracePeriod=30 Feb 28 04:54:15 crc kubenswrapper[5014]: I0228 04:54:15.164581 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-zpvkp" event={"ID":"3240ff52-33fc-4027-a9ea-f3e17780b320","Type":"ContainerStarted","Data":"129dec289e5b998d8987fabb65bda9b643da0fa333c464af7bd11e43f49b7fa3"} Feb 28 04:54:15 crc kubenswrapper[5014]: I0228 04:54:15.164821 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-zpvkp" Feb 28 04:54:15 crc kubenswrapper[5014]: I0228 04:54:15.165602 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.445795676 podStartE2EDuration="4.165584652s" podCreationTimestamp="2026-02-28 04:54:11 +0000 UTC" firstStartedPulling="2026-02-28 04:54:13.055834376 +0000 UTC m=+1241.725960286" lastFinishedPulling="2026-02-28 04:54:13.775623352 +0000 UTC m=+1242.445749262" observedRunningTime="2026-02-28 04:54:15.158845412 +0000 UTC m=+1243.828971312" watchObservedRunningTime="2026-02-28 04:54:15.165584652 +0000 UTC m=+1243.835710562" Feb 28 04:54:15 crc kubenswrapper[5014]: I0228 04:54:15.179140 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.179121345 podStartE2EDuration="4.179121345s" podCreationTimestamp="2026-02-28 04:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:54:15.175331373 +0000 UTC m=+1243.845457283" 
watchObservedRunningTime="2026-02-28 04:54:15.179121345 +0000 UTC m=+1243.849247255" Feb 28 04:54:15 crc kubenswrapper[5014]: I0228 04:54:15.715177 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:54:15 crc kubenswrapper[5014]: I0228 04:54:15.715538 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:54:15 crc kubenswrapper[5014]: I0228 04:54:15.715595 5014 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cct62" Feb 28 04:54:15 crc kubenswrapper[5014]: I0228 04:54:15.716395 5014 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9fe0724568f1359a83a127eb5109a6ee8f87dacb3ce893d1b36328a0a6724e45"} pod="openshift-machine-config-operator/machine-config-daemon-cct62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 04:54:15 crc kubenswrapper[5014]: I0228 04:54:15.716462 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" containerID="cri-o://9fe0724568f1359a83a127eb5109a6ee8f87dacb3ce893d1b36328a0a6724e45" gracePeriod=600 Feb 28 04:54:15 crc kubenswrapper[5014]: I0228 04:54:15.808400 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 28 04:54:15 crc kubenswrapper[5014]: I0228 04:54:15.831540 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-zpvkp" podStartSLOduration=4.831523973 podStartE2EDuration="4.831523973s" podCreationTimestamp="2026-02-28 04:54:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:54:15.197054456 +0000 UTC m=+1243.867180356" watchObservedRunningTime="2026-02-28 04:54:15.831523973 +0000 UTC m=+1244.501649883" Feb 28 04:54:15 crc kubenswrapper[5014]: I0228 04:54:15.925984 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-logs\") pod \"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9\" (UID: \"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9\") " Feb 28 04:54:15 crc kubenswrapper[5014]: I0228 04:54:15.926100 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czdm8\" (UniqueName: \"kubernetes.io/projected/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-kube-api-access-czdm8\") pod \"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9\" (UID: \"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9\") " Feb 28 04:54:15 crc kubenswrapper[5014]: I0228 04:54:15.926344 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-combined-ca-bundle\") pod \"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9\" (UID: \"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9\") " Feb 28 04:54:15 crc kubenswrapper[5014]: I0228 04:54:15.926407 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-config-data-custom\") pod \"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9\" (UID: 
\"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9\") " Feb 28 04:54:15 crc kubenswrapper[5014]: I0228 04:54:15.926457 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-scripts\") pod \"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9\" (UID: \"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9\") " Feb 28 04:54:15 crc kubenswrapper[5014]: I0228 04:54:15.926485 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-etc-machine-id\") pod \"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9\" (UID: \"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9\") " Feb 28 04:54:15 crc kubenswrapper[5014]: I0228 04:54:15.926513 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-config-data\") pod \"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9\" (UID: \"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9\") " Feb 28 04:54:15 crc kubenswrapper[5014]: I0228 04:54:15.926576 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "783dba5a-7aad-4820-82e2-6ecd3ff6d1d9" (UID: "783dba5a-7aad-4820-82e2-6ecd3ff6d1d9"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:54:15 crc kubenswrapper[5014]: I0228 04:54:15.926712 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-logs" (OuterVolumeSpecName: "logs") pod "783dba5a-7aad-4820-82e2-6ecd3ff6d1d9" (UID: "783dba5a-7aad-4820-82e2-6ecd3ff6d1d9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:54:15 crc kubenswrapper[5014]: I0228 04:54:15.927227 5014 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:15 crc kubenswrapper[5014]: I0228 04:54:15.927240 5014 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-logs\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:15 crc kubenswrapper[5014]: I0228 04:54:15.956506 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "783dba5a-7aad-4820-82e2-6ecd3ff6d1d9" (UID: "783dba5a-7aad-4820-82e2-6ecd3ff6d1d9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:54:15 crc kubenswrapper[5014]: I0228 04:54:15.956863 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-kube-api-access-czdm8" (OuterVolumeSpecName: "kube-api-access-czdm8") pod "783dba5a-7aad-4820-82e2-6ecd3ff6d1d9" (UID: "783dba5a-7aad-4820-82e2-6ecd3ff6d1d9"). InnerVolumeSpecName "kube-api-access-czdm8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:54:15 crc kubenswrapper[5014]: I0228 04:54:15.956889 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-scripts" (OuterVolumeSpecName: "scripts") pod "783dba5a-7aad-4820-82e2-6ecd3ff6d1d9" (UID: "783dba5a-7aad-4820-82e2-6ecd3ff6d1d9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:54:15 crc kubenswrapper[5014]: I0228 04:54:15.966376 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "783dba5a-7aad-4820-82e2-6ecd3ff6d1d9" (UID: "783dba5a-7aad-4820-82e2-6ecd3ff6d1d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:54:15 crc kubenswrapper[5014]: I0228 04:54:15.993961 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-config-data" (OuterVolumeSpecName: "config-data") pod "783dba5a-7aad-4820-82e2-6ecd3ff6d1d9" (UID: "783dba5a-7aad-4820-82e2-6ecd3ff6d1d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.029637 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czdm8\" (UniqueName: \"kubernetes.io/projected/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-kube-api-access-czdm8\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.029675 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.029684 5014 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.029694 5014 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-scripts\") on node \"crc\" 
DevicePath \"\"" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.029704 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.174009 5014 generic.go:334] "Generic (PLEG): container finished" podID="783dba5a-7aad-4820-82e2-6ecd3ff6d1d9" containerID="d4970f7d4123abcb554af9faf6769978c8f70e29bff86875b9ed26688150b2c1" exitCode=0 Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.174040 5014 generic.go:334] "Generic (PLEG): container finished" podID="783dba5a-7aad-4820-82e2-6ecd3ff6d1d9" containerID="0de5545cd55257530d95ddb1f6c871c683d032adbb921d3ea257a0c14348e358" exitCode=143 Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.174125 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.180320 5014 generic.go:334] "Generic (PLEG): container finished" podID="6aad0009-d904-48f8-8e30-82205907ece1" containerID="9fe0724568f1359a83a127eb5109a6ee8f87dacb3ce893d1b36328a0a6724e45" exitCode=0 Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.186124 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9","Type":"ContainerDied","Data":"d4970f7d4123abcb554af9faf6769978c8f70e29bff86875b9ed26688150b2c1"} Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.186161 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9","Type":"ContainerDied","Data":"0de5545cd55257530d95ddb1f6c871c683d032adbb921d3ea257a0c14348e358"} Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.186172 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"783dba5a-7aad-4820-82e2-6ecd3ff6d1d9","Type":"ContainerDied","Data":"05881e5ef24f9d88169544b03eb9b9c054921340b5788d5cacf4cdb7fd688ffb"} Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.186182 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" event={"ID":"6aad0009-d904-48f8-8e30-82205907ece1","Type":"ContainerDied","Data":"9fe0724568f1359a83a127eb5109a6ee8f87dacb3ce893d1b36328a0a6724e45"} Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.186201 5014 scope.go:117] "RemoveContainer" containerID="d4970f7d4123abcb554af9faf6769978c8f70e29bff86875b9ed26688150b2c1" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.247850 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.252527 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.275391 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 28 04:54:16 crc kubenswrapper[5014]: E0228 04:54:16.275832 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f7daae1-d4e4-4396-b788-d49aef714ae4" containerName="dnsmasq-dns" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.275850 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f7daae1-d4e4-4396-b788-d49aef714ae4" containerName="dnsmasq-dns" Feb 28 04:54:16 crc kubenswrapper[5014]: E0228 04:54:16.275869 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="783dba5a-7aad-4820-82e2-6ecd3ff6d1d9" containerName="cinder-api" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.275876 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="783dba5a-7aad-4820-82e2-6ecd3ff6d1d9" containerName="cinder-api" Feb 28 04:54:16 crc kubenswrapper[5014]: E0228 04:54:16.275890 5014 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="6f7daae1-d4e4-4396-b788-d49aef714ae4" containerName="init" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.275896 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f7daae1-d4e4-4396-b788-d49aef714ae4" containerName="init" Feb 28 04:54:16 crc kubenswrapper[5014]: E0228 04:54:16.275920 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="783dba5a-7aad-4820-82e2-6ecd3ff6d1d9" containerName="cinder-api-log" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.275927 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="783dba5a-7aad-4820-82e2-6ecd3ff6d1d9" containerName="cinder-api-log" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.276110 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f7daae1-d4e4-4396-b788-d49aef714ae4" containerName="dnsmasq-dns" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.276132 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="783dba5a-7aad-4820-82e2-6ecd3ff6d1d9" containerName="cinder-api-log" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.276141 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="783dba5a-7aad-4820-82e2-6ecd3ff6d1d9" containerName="cinder-api" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.277251 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.282006 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.282202 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.285622 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.294947 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.435979 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89500e11-205d-40a6-ba7b-54b76ec65b69-scripts\") pod \"cinder-api-0\" (UID: \"89500e11-205d-40a6-ba7b-54b76ec65b69\") " pod="openstack/cinder-api-0" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.436064 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/89500e11-205d-40a6-ba7b-54b76ec65b69-config-data-custom\") pod \"cinder-api-0\" (UID: \"89500e11-205d-40a6-ba7b-54b76ec65b69\") " pod="openstack/cinder-api-0" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.436116 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89500e11-205d-40a6-ba7b-54b76ec65b69-config-data\") pod \"cinder-api-0\" (UID: \"89500e11-205d-40a6-ba7b-54b76ec65b69\") " pod="openstack/cinder-api-0" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.436138 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/89500e11-205d-40a6-ba7b-54b76ec65b69-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"89500e11-205d-40a6-ba7b-54b76ec65b69\") " pod="openstack/cinder-api-0" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.436157 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vmt9\" (UniqueName: \"kubernetes.io/projected/89500e11-205d-40a6-ba7b-54b76ec65b69-kube-api-access-8vmt9\") pod \"cinder-api-0\" (UID: \"89500e11-205d-40a6-ba7b-54b76ec65b69\") " pod="openstack/cinder-api-0" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.436190 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89500e11-205d-40a6-ba7b-54b76ec65b69-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"89500e11-205d-40a6-ba7b-54b76ec65b69\") " pod="openstack/cinder-api-0" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.436213 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89500e11-205d-40a6-ba7b-54b76ec65b69-logs\") pod \"cinder-api-0\" (UID: \"89500e11-205d-40a6-ba7b-54b76ec65b69\") " pod="openstack/cinder-api-0" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.436239 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/89500e11-205d-40a6-ba7b-54b76ec65b69-etc-machine-id\") pod \"cinder-api-0\" (UID: \"89500e11-205d-40a6-ba7b-54b76ec65b69\") " pod="openstack/cinder-api-0" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.436276 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/89500e11-205d-40a6-ba7b-54b76ec65b69-public-tls-certs\") pod \"cinder-api-0\" 
(UID: \"89500e11-205d-40a6-ba7b-54b76ec65b69\") " pod="openstack/cinder-api-0" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.537782 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/89500e11-205d-40a6-ba7b-54b76ec65b69-config-data-custom\") pod \"cinder-api-0\" (UID: \"89500e11-205d-40a6-ba7b-54b76ec65b69\") " pod="openstack/cinder-api-0" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.537891 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89500e11-205d-40a6-ba7b-54b76ec65b69-config-data\") pod \"cinder-api-0\" (UID: \"89500e11-205d-40a6-ba7b-54b76ec65b69\") " pod="openstack/cinder-api-0" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.537924 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89500e11-205d-40a6-ba7b-54b76ec65b69-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"89500e11-205d-40a6-ba7b-54b76ec65b69\") " pod="openstack/cinder-api-0" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.537944 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vmt9\" (UniqueName: \"kubernetes.io/projected/89500e11-205d-40a6-ba7b-54b76ec65b69-kube-api-access-8vmt9\") pod \"cinder-api-0\" (UID: \"89500e11-205d-40a6-ba7b-54b76ec65b69\") " pod="openstack/cinder-api-0" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.537990 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89500e11-205d-40a6-ba7b-54b76ec65b69-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"89500e11-205d-40a6-ba7b-54b76ec65b69\") " pod="openstack/cinder-api-0" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.538027 5014 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89500e11-205d-40a6-ba7b-54b76ec65b69-logs\") pod \"cinder-api-0\" (UID: \"89500e11-205d-40a6-ba7b-54b76ec65b69\") " pod="openstack/cinder-api-0" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.538055 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/89500e11-205d-40a6-ba7b-54b76ec65b69-etc-machine-id\") pod \"cinder-api-0\" (UID: \"89500e11-205d-40a6-ba7b-54b76ec65b69\") " pod="openstack/cinder-api-0" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.538116 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/89500e11-205d-40a6-ba7b-54b76ec65b69-public-tls-certs\") pod \"cinder-api-0\" (UID: \"89500e11-205d-40a6-ba7b-54b76ec65b69\") " pod="openstack/cinder-api-0" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.538157 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89500e11-205d-40a6-ba7b-54b76ec65b69-scripts\") pod \"cinder-api-0\" (UID: \"89500e11-205d-40a6-ba7b-54b76ec65b69\") " pod="openstack/cinder-api-0" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.539782 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/89500e11-205d-40a6-ba7b-54b76ec65b69-logs\") pod \"cinder-api-0\" (UID: \"89500e11-205d-40a6-ba7b-54b76ec65b69\") " pod="openstack/cinder-api-0" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.540230 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/89500e11-205d-40a6-ba7b-54b76ec65b69-etc-machine-id\") pod \"cinder-api-0\" (UID: \"89500e11-205d-40a6-ba7b-54b76ec65b69\") " pod="openstack/cinder-api-0" Feb 28 04:54:16 crc 
kubenswrapper[5014]: I0228 04:54:16.543333 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89500e11-205d-40a6-ba7b-54b76ec65b69-scripts\") pod \"cinder-api-0\" (UID: \"89500e11-205d-40a6-ba7b-54b76ec65b69\") " pod="openstack/cinder-api-0" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.543940 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/89500e11-205d-40a6-ba7b-54b76ec65b69-config-data-custom\") pod \"cinder-api-0\" (UID: \"89500e11-205d-40a6-ba7b-54b76ec65b69\") " pod="openstack/cinder-api-0" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.546701 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89500e11-205d-40a6-ba7b-54b76ec65b69-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"89500e11-205d-40a6-ba7b-54b76ec65b69\") " pod="openstack/cinder-api-0" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.549160 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89500e11-205d-40a6-ba7b-54b76ec65b69-config-data\") pod \"cinder-api-0\" (UID: \"89500e11-205d-40a6-ba7b-54b76ec65b69\") " pod="openstack/cinder-api-0" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.549357 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/89500e11-205d-40a6-ba7b-54b76ec65b69-public-tls-certs\") pod \"cinder-api-0\" (UID: \"89500e11-205d-40a6-ba7b-54b76ec65b69\") " pod="openstack/cinder-api-0" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.549494 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/89500e11-205d-40a6-ba7b-54b76ec65b69-internal-tls-certs\") pod \"cinder-api-0\" (UID: 
\"89500e11-205d-40a6-ba7b-54b76ec65b69\") " pod="openstack/cinder-api-0" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.557290 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vmt9\" (UniqueName: \"kubernetes.io/projected/89500e11-205d-40a6-ba7b-54b76ec65b69-kube-api-access-8vmt9\") pod \"cinder-api-0\" (UID: \"89500e11-205d-40a6-ba7b-54b76ec65b69\") " pod="openstack/cinder-api-0" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.598319 5014 scope.go:117] "RemoveContainer" containerID="0de5545cd55257530d95ddb1f6c871c683d032adbb921d3ea257a0c14348e358" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.609912 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.610406 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.619009 5014 scope.go:117] "RemoveContainer" containerID="d4970f7d4123abcb554af9faf6769978c8f70e29bff86875b9ed26688150b2c1" Feb 28 04:54:16 crc kubenswrapper[5014]: E0228 04:54:16.620933 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4970f7d4123abcb554af9faf6769978c8f70e29bff86875b9ed26688150b2c1\": container with ID starting with d4970f7d4123abcb554af9faf6769978c8f70e29bff86875b9ed26688150b2c1 not found: ID does not exist" containerID="d4970f7d4123abcb554af9faf6769978c8f70e29bff86875b9ed26688150b2c1" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.620975 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4970f7d4123abcb554af9faf6769978c8f70e29bff86875b9ed26688150b2c1"} err="failed to get container status \"d4970f7d4123abcb554af9faf6769978c8f70e29bff86875b9ed26688150b2c1\": rpc error: code = NotFound desc = could not find container 
\"d4970f7d4123abcb554af9faf6769978c8f70e29bff86875b9ed26688150b2c1\": container with ID starting with d4970f7d4123abcb554af9faf6769978c8f70e29bff86875b9ed26688150b2c1 not found: ID does not exist" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.621022 5014 scope.go:117] "RemoveContainer" containerID="0de5545cd55257530d95ddb1f6c871c683d032adbb921d3ea257a0c14348e358" Feb 28 04:54:16 crc kubenswrapper[5014]: E0228 04:54:16.624912 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0de5545cd55257530d95ddb1f6c871c683d032adbb921d3ea257a0c14348e358\": container with ID starting with 0de5545cd55257530d95ddb1f6c871c683d032adbb921d3ea257a0c14348e358 not found: ID does not exist" containerID="0de5545cd55257530d95ddb1f6c871c683d032adbb921d3ea257a0c14348e358" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.624952 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0de5545cd55257530d95ddb1f6c871c683d032adbb921d3ea257a0c14348e358"} err="failed to get container status \"0de5545cd55257530d95ddb1f6c871c683d032adbb921d3ea257a0c14348e358\": rpc error: code = NotFound desc = could not find container \"0de5545cd55257530d95ddb1f6c871c683d032adbb921d3ea257a0c14348e358\": container with ID starting with 0de5545cd55257530d95ddb1f6c871c683d032adbb921d3ea257a0c14348e358 not found: ID does not exist" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.624976 5014 scope.go:117] "RemoveContainer" containerID="d4970f7d4123abcb554af9faf6769978c8f70e29bff86875b9ed26688150b2c1" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.626888 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4970f7d4123abcb554af9faf6769978c8f70e29bff86875b9ed26688150b2c1"} err="failed to get container status \"d4970f7d4123abcb554af9faf6769978c8f70e29bff86875b9ed26688150b2c1\": rpc error: code = NotFound desc = could not find 
container \"d4970f7d4123abcb554af9faf6769978c8f70e29bff86875b9ed26688150b2c1\": container with ID starting with d4970f7d4123abcb554af9faf6769978c8f70e29bff86875b9ed26688150b2c1 not found: ID does not exist" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.626919 5014 scope.go:117] "RemoveContainer" containerID="0de5545cd55257530d95ddb1f6c871c683d032adbb921d3ea257a0c14348e358" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.627577 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0de5545cd55257530d95ddb1f6c871c683d032adbb921d3ea257a0c14348e358"} err="failed to get container status \"0de5545cd55257530d95ddb1f6c871c683d032adbb921d3ea257a0c14348e358\": rpc error: code = NotFound desc = could not find container \"0de5545cd55257530d95ddb1f6c871c683d032adbb921d3ea257a0c14348e358\": container with ID starting with 0de5545cd55257530d95ddb1f6c871c683d032adbb921d3ea257a0c14348e358 not found: ID does not exist" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.627628 5014 scope.go:117] "RemoveContainer" containerID="cf1c2df486dbe48ee5a602ed54854b395ec2709d14f3810f6a23ce669b21c259" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.821947 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-6cbc78cbb4-6wlp7" Feb 28 04:54:16 crc kubenswrapper[5014]: I0228 04:54:16.863380 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-c9c88866d-6m8lj" Feb 28 04:54:17 crc kubenswrapper[5014]: I0228 04:54:17.143993 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 28 04:54:17 crc kubenswrapper[5014]: I0228 04:54:17.208949 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"89500e11-205d-40a6-ba7b-54b76ec65b69","Type":"ContainerStarted","Data":"bc0aadce44b180db2c562b77c5f33cb2e92d8d033417eb85e2d3171db02f72c9"} Feb 28 04:54:17 crc 
kubenswrapper[5014]: I0228 04:54:17.221902 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" event={"ID":"6aad0009-d904-48f8-8e30-82205907ece1","Type":"ContainerStarted","Data":"a1900058ed5d5055efcaa8b7a5a928b3456052935d481ae9dedaea0c3e448c54"} Feb 28 04:54:18 crc kubenswrapper[5014]: I0228 04:54:18.184483 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="783dba5a-7aad-4820-82e2-6ecd3ff6d1d9" path="/var/lib/kubelet/pods/783dba5a-7aad-4820-82e2-6ecd3ff6d1d9/volumes" Feb 28 04:54:18 crc kubenswrapper[5014]: I0228 04:54:18.186466 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-85ff55b8dd-q46np" Feb 28 04:54:18 crc kubenswrapper[5014]: I0228 04:54:18.267621 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"89500e11-205d-40a6-ba7b-54b76ec65b69","Type":"ContainerStarted","Data":"71804bcc34e30164b798310c06b04ceb78f54a553a0f50bbc5527d31750d04ee"} Feb 28 04:54:18 crc kubenswrapper[5014]: I0228 04:54:18.428940 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-85ff55b8dd-q46np" Feb 28 04:54:18 crc kubenswrapper[5014]: I0228 04:54:18.522185 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5b8b564f66-hrmxb"] Feb 28 04:54:18 crc kubenswrapper[5014]: I0228 04:54:18.522500 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5b8b564f66-hrmxb" podUID="e048509e-80c3-4102-b7a6-bb3d30f06ec1" containerName="barbican-api-log" containerID="cri-o://aceabd4dc758e5ad5002c03cd24c4054af41007750634d96c9cab1edb321d655" gracePeriod=30 Feb 28 04:54:18 crc kubenswrapper[5014]: I0228 04:54:18.522591 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5b8b564f66-hrmxb" podUID="e048509e-80c3-4102-b7a6-bb3d30f06ec1" 
containerName="barbican-api" containerID="cri-o://37b2d115396c534df38db3022865a781542af116d7d950a4e5cafe5ae0a18697" gracePeriod=30 Feb 28 04:54:18 crc kubenswrapper[5014]: I0228 04:54:18.566467 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5b8b564f66-hrmxb" podUID="e048509e-80c3-4102-b7a6-bb3d30f06ec1" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": EOF" Feb 28 04:54:18 crc kubenswrapper[5014]: I0228 04:54:18.804918 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-c9c88866d-6m8lj" Feb 28 04:54:18 crc kubenswrapper[5014]: I0228 04:54:18.923677 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6cbc78cbb4-6wlp7"] Feb 28 04:54:18 crc kubenswrapper[5014]: I0228 04:54:18.923928 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6cbc78cbb4-6wlp7" podUID="80e6122e-74aa-4ee6-a7a3-4af495cb55b7" containerName="horizon-log" containerID="cri-o://76ed047bf90263787959b88328e36777c017c0f8dd1ff494685dddd105e6d8cd" gracePeriod=30 Feb 28 04:54:18 crc kubenswrapper[5014]: I0228 04:54:18.924349 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-6cbc78cbb4-6wlp7" podUID="80e6122e-74aa-4ee6-a7a3-4af495cb55b7" containerName="horizon" containerID="cri-o://e0ca2cc31bef32f1a8996357e09afc4440944891b9575e4c249702b104fa3fa9" gracePeriod=30 Feb 28 04:54:19 crc kubenswrapper[5014]: I0228 04:54:19.027064 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6cbc78cbb4-6wlp7" podUID="80e6122e-74aa-4ee6-a7a3-4af495cb55b7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Feb 28 04:54:19 crc kubenswrapper[5014]: I0228 04:54:19.277170 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"89500e11-205d-40a6-ba7b-54b76ec65b69","Type":"ContainerStarted","Data":"9ebab7fde555fa9247067a92e6c30152f1be4ad241d7325e39d08698e8858ebb"} Feb 28 04:54:19 crc kubenswrapper[5014]: I0228 04:54:19.278406 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 28 04:54:19 crc kubenswrapper[5014]: I0228 04:54:19.280930 5014 generic.go:334] "Generic (PLEG): container finished" podID="e048509e-80c3-4102-b7a6-bb3d30f06ec1" containerID="aceabd4dc758e5ad5002c03cd24c4054af41007750634d96c9cab1edb321d655" exitCode=143 Feb 28 04:54:19 crc kubenswrapper[5014]: I0228 04:54:19.280973 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b8b564f66-hrmxb" event={"ID":"e048509e-80c3-4102-b7a6-bb3d30f06ec1","Type":"ContainerDied","Data":"aceabd4dc758e5ad5002c03cd24c4054af41007750634d96c9cab1edb321d655"} Feb 28 04:54:19 crc kubenswrapper[5014]: I0228 04:54:19.304073 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.304053899 podStartE2EDuration="3.304053899s" podCreationTimestamp="2026-02-28 04:54:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:54:19.297676228 +0000 UTC m=+1247.967802138" watchObservedRunningTime="2026-02-28 04:54:19.304053899 +0000 UTC m=+1247.974179809" Feb 28 04:54:20 crc kubenswrapper[5014]: I0228 04:54:20.316058 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5b8b564f66-hrmxb" Feb 28 04:54:21 crc kubenswrapper[5014]: I0228 04:54:21.637981 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-zpvkp" Feb 28 04:54:21 crc kubenswrapper[5014]: I0228 04:54:21.712945 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-sv6qs"] Feb 28 04:54:21 crc 
kubenswrapper[5014]: I0228 04:54:21.713854 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-785d8bcb8c-sv6qs" podUID="651265e8-74ac-412e-a823-a7e19b2c04b6" containerName="dnsmasq-dns" containerID="cri-o://55a544d313216d8183984f8ef62ce60d0445fdc4c04a104b1b368cea381a6fba" gracePeriod=10 Feb 28 04:54:21 crc kubenswrapper[5014]: I0228 04:54:21.879153 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 28 04:54:21 crc kubenswrapper[5014]: I0228 04:54:21.951020 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 28 04:54:22 crc kubenswrapper[5014]: I0228 04:54:22.314115 5014 generic.go:334] "Generic (PLEG): container finished" podID="651265e8-74ac-412e-a823-a7e19b2c04b6" containerID="55a544d313216d8183984f8ef62ce60d0445fdc4c04a104b1b368cea381a6fba" exitCode=0 Feb 28 04:54:22 crc kubenswrapper[5014]: I0228 04:54:22.314232 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-sv6qs" event={"ID":"651265e8-74ac-412e-a823-a7e19b2c04b6","Type":"ContainerDied","Data":"55a544d313216d8183984f8ef62ce60d0445fdc4c04a104b1b368cea381a6fba"} Feb 28 04:54:22 crc kubenswrapper[5014]: I0228 04:54:22.314301 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-sv6qs" event={"ID":"651265e8-74ac-412e-a823-a7e19b2c04b6","Type":"ContainerDied","Data":"b159b17ef63f98fd4b0150912c8a76584e5f1ceec75238b366613cbf6ba11f2f"} Feb 28 04:54:22 crc kubenswrapper[5014]: I0228 04:54:22.314322 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b159b17ef63f98fd4b0150912c8a76584e5f1ceec75238b366613cbf6ba11f2f" Feb 28 04:54:22 crc kubenswrapper[5014]: I0228 04:54:22.314424 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="bc65cd4a-e488-4a36-a861-18fddd2b9c6b" 
containerName="probe" containerID="cri-o://a20e7366a8af755e55533b0c39e29b6920b49f5a2790abacba4d0146be917e6c" gracePeriod=30 Feb 28 04:54:22 crc kubenswrapper[5014]: I0228 04:54:22.314449 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="bc65cd4a-e488-4a36-a861-18fddd2b9c6b" containerName="cinder-scheduler" containerID="cri-o://c333ec1fe46efa9be128b113cedf9fbb11b771efe5408907379b5842279f7fd9" gracePeriod=30 Feb 28 04:54:22 crc kubenswrapper[5014]: I0228 04:54:22.329479 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-sv6qs" Feb 28 04:54:22 crc kubenswrapper[5014]: I0228 04:54:22.355015 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6cbc78cbb4-6wlp7" podUID="80e6122e-74aa-4ee6-a7a3-4af495cb55b7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:47348->10.217.0.150:8443: read: connection reset by peer" Feb 28 04:54:22 crc kubenswrapper[5014]: I0228 04:54:22.458167 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4ld2l\" (UniqueName: \"kubernetes.io/projected/651265e8-74ac-412e-a823-a7e19b2c04b6-kube-api-access-4ld2l\") pod \"651265e8-74ac-412e-a823-a7e19b2c04b6\" (UID: \"651265e8-74ac-412e-a823-a7e19b2c04b6\") " Feb 28 04:54:22 crc kubenswrapper[5014]: I0228 04:54:22.458270 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/651265e8-74ac-412e-a823-a7e19b2c04b6-dns-swift-storage-0\") pod \"651265e8-74ac-412e-a823-a7e19b2c04b6\" (UID: \"651265e8-74ac-412e-a823-a7e19b2c04b6\") " Feb 28 04:54:22 crc kubenswrapper[5014]: I0228 04:54:22.458320 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/651265e8-74ac-412e-a823-a7e19b2c04b6-dns-svc\") pod \"651265e8-74ac-412e-a823-a7e19b2c04b6\" (UID: \"651265e8-74ac-412e-a823-a7e19b2c04b6\") " Feb 28 04:54:22 crc kubenswrapper[5014]: I0228 04:54:22.458369 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/651265e8-74ac-412e-a823-a7e19b2c04b6-ovsdbserver-nb\") pod \"651265e8-74ac-412e-a823-a7e19b2c04b6\" (UID: \"651265e8-74ac-412e-a823-a7e19b2c04b6\") " Feb 28 04:54:22 crc kubenswrapper[5014]: I0228 04:54:22.458411 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/651265e8-74ac-412e-a823-a7e19b2c04b6-config\") pod \"651265e8-74ac-412e-a823-a7e19b2c04b6\" (UID: \"651265e8-74ac-412e-a823-a7e19b2c04b6\") " Feb 28 04:54:22 crc kubenswrapper[5014]: I0228 04:54:22.458514 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/651265e8-74ac-412e-a823-a7e19b2c04b6-ovsdbserver-sb\") pod \"651265e8-74ac-412e-a823-a7e19b2c04b6\" (UID: \"651265e8-74ac-412e-a823-a7e19b2c04b6\") " Feb 28 04:54:22 crc kubenswrapper[5014]: I0228 04:54:22.466312 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/651265e8-74ac-412e-a823-a7e19b2c04b6-kube-api-access-4ld2l" (OuterVolumeSpecName: "kube-api-access-4ld2l") pod "651265e8-74ac-412e-a823-a7e19b2c04b6" (UID: "651265e8-74ac-412e-a823-a7e19b2c04b6"). InnerVolumeSpecName "kube-api-access-4ld2l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:54:22 crc kubenswrapper[5014]: I0228 04:54:22.516369 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/651265e8-74ac-412e-a823-a7e19b2c04b6-config" (OuterVolumeSpecName: "config") pod "651265e8-74ac-412e-a823-a7e19b2c04b6" (UID: "651265e8-74ac-412e-a823-a7e19b2c04b6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:54:22 crc kubenswrapper[5014]: I0228 04:54:22.521194 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/651265e8-74ac-412e-a823-a7e19b2c04b6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "651265e8-74ac-412e-a823-a7e19b2c04b6" (UID: "651265e8-74ac-412e-a823-a7e19b2c04b6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:54:22 crc kubenswrapper[5014]: I0228 04:54:22.532548 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/651265e8-74ac-412e-a823-a7e19b2c04b6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "651265e8-74ac-412e-a823-a7e19b2c04b6" (UID: "651265e8-74ac-412e-a823-a7e19b2c04b6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:54:22 crc kubenswrapper[5014]: I0228 04:54:22.539583 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/651265e8-74ac-412e-a823-a7e19b2c04b6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "651265e8-74ac-412e-a823-a7e19b2c04b6" (UID: "651265e8-74ac-412e-a823-a7e19b2c04b6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:54:22 crc kubenswrapper[5014]: I0228 04:54:22.542861 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/651265e8-74ac-412e-a823-a7e19b2c04b6-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "651265e8-74ac-412e-a823-a7e19b2c04b6" (UID: "651265e8-74ac-412e-a823-a7e19b2c04b6"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:54:22 crc kubenswrapper[5014]: I0228 04:54:22.561036 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4ld2l\" (UniqueName: \"kubernetes.io/projected/651265e8-74ac-412e-a823-a7e19b2c04b6-kube-api-access-4ld2l\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:22 crc kubenswrapper[5014]: I0228 04:54:22.561102 5014 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/651265e8-74ac-412e-a823-a7e19b2c04b6-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:22 crc kubenswrapper[5014]: I0228 04:54:22.561116 5014 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/651265e8-74ac-412e-a823-a7e19b2c04b6-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:22 crc kubenswrapper[5014]: I0228 04:54:22.561127 5014 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/651265e8-74ac-412e-a823-a7e19b2c04b6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:22 crc kubenswrapper[5014]: I0228 04:54:22.561136 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/651265e8-74ac-412e-a823-a7e19b2c04b6-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:22 crc kubenswrapper[5014]: I0228 04:54:22.561159 5014 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/651265e8-74ac-412e-a823-a7e19b2c04b6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:23 crc kubenswrapper[5014]: I0228 04:54:23.324086 5014 generic.go:334] "Generic (PLEG): container finished" podID="bc65cd4a-e488-4a36-a861-18fddd2b9c6b" containerID="a20e7366a8af755e55533b0c39e29b6920b49f5a2790abacba4d0146be917e6c" exitCode=0 Feb 28 04:54:23 crc kubenswrapper[5014]: I0228 04:54:23.324192 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bc65cd4a-e488-4a36-a861-18fddd2b9c6b","Type":"ContainerDied","Data":"a20e7366a8af755e55533b0c39e29b6920b49f5a2790abacba4d0146be917e6c"} Feb 28 04:54:23 crc kubenswrapper[5014]: I0228 04:54:23.327081 5014 generic.go:334] "Generic (PLEG): container finished" podID="80e6122e-74aa-4ee6-a7a3-4af495cb55b7" containerID="e0ca2cc31bef32f1a8996357e09afc4440944891b9575e4c249702b104fa3fa9" exitCode=0 Feb 28 04:54:23 crc kubenswrapper[5014]: I0228 04:54:23.327103 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cbc78cbb4-6wlp7" event={"ID":"80e6122e-74aa-4ee6-a7a3-4af495cb55b7","Type":"ContainerDied","Data":"e0ca2cc31bef32f1a8996357e09afc4440944891b9575e4c249702b104fa3fa9"} Feb 28 04:54:23 crc kubenswrapper[5014]: I0228 04:54:23.327152 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-sv6qs" Feb 28 04:54:23 crc kubenswrapper[5014]: I0228 04:54:23.357170 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-sv6qs"] Feb 28 04:54:23 crc kubenswrapper[5014]: I0228 04:54:23.366486 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-sv6qs"] Feb 28 04:54:23 crc kubenswrapper[5014]: I0228 04:54:23.739064 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5b8b564f66-hrmxb" podUID="e048509e-80c3-4102-b7a6-bb3d30f06ec1" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 28 04:54:24 crc kubenswrapper[5014]: I0228 04:54:24.066825 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5b8b564f66-hrmxb" podUID="e048509e-80c3-4102-b7a6-bb3d30f06ec1" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": read tcp 10.217.0.2:37654->10.217.0.164:9311: read: connection reset by peer" Feb 28 04:54:24 crc kubenswrapper[5014]: I0228 04:54:24.066832 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5b8b564f66-hrmxb" podUID="e048509e-80c3-4102-b7a6-bb3d30f06ec1" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": read tcp 10.217.0.2:37666->10.217.0.164:9311: read: connection reset by peer" Feb 28 04:54:24 crc kubenswrapper[5014]: I0228 04:54:24.067273 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5b8b564f66-hrmxb" podUID="e048509e-80c3-4102-b7a6-bb3d30f06ec1" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.164:9311/healthcheck\": dial tcp 10.217.0.164:9311: connect: connection refused" Feb 28 04:54:24 crc 
kubenswrapper[5014]: I0228 04:54:24.184634 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="651265e8-74ac-412e-a823-a7e19b2c04b6" path="/var/lib/kubelet/pods/651265e8-74ac-412e-a823-a7e19b2c04b6/volumes" Feb 28 04:54:24 crc kubenswrapper[5014]: I0228 04:54:24.365097 5014 generic.go:334] "Generic (PLEG): container finished" podID="e048509e-80c3-4102-b7a6-bb3d30f06ec1" containerID="37b2d115396c534df38db3022865a781542af116d7d950a4e5cafe5ae0a18697" exitCode=0 Feb 28 04:54:24 crc kubenswrapper[5014]: I0228 04:54:24.365139 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b8b564f66-hrmxb" event={"ID":"e048509e-80c3-4102-b7a6-bb3d30f06ec1","Type":"ContainerDied","Data":"37b2d115396c534df38db3022865a781542af116d7d950a4e5cafe5ae0a18697"} Feb 28 04:54:24 crc kubenswrapper[5014]: I0228 04:54:24.519489 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6cbc78cbb4-6wlp7" podUID="80e6122e-74aa-4ee6-a7a3-4af495cb55b7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Feb 28 04:54:24 crc kubenswrapper[5014]: I0228 04:54:24.522786 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5b8b564f66-hrmxb" Feb 28 04:54:24 crc kubenswrapper[5014]: I0228 04:54:24.601977 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e048509e-80c3-4102-b7a6-bb3d30f06ec1-logs\") pod \"e048509e-80c3-4102-b7a6-bb3d30f06ec1\" (UID: \"e048509e-80c3-4102-b7a6-bb3d30f06ec1\") " Feb 28 04:54:24 crc kubenswrapper[5014]: I0228 04:54:24.602144 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e048509e-80c3-4102-b7a6-bb3d30f06ec1-config-data\") pod \"e048509e-80c3-4102-b7a6-bb3d30f06ec1\" (UID: \"e048509e-80c3-4102-b7a6-bb3d30f06ec1\") " Feb 28 04:54:24 crc kubenswrapper[5014]: I0228 04:54:24.602213 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e048509e-80c3-4102-b7a6-bb3d30f06ec1-combined-ca-bundle\") pod \"e048509e-80c3-4102-b7a6-bb3d30f06ec1\" (UID: \"e048509e-80c3-4102-b7a6-bb3d30f06ec1\") " Feb 28 04:54:24 crc kubenswrapper[5014]: I0228 04:54:24.602267 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phdpq\" (UniqueName: \"kubernetes.io/projected/e048509e-80c3-4102-b7a6-bb3d30f06ec1-kube-api-access-phdpq\") pod \"e048509e-80c3-4102-b7a6-bb3d30f06ec1\" (UID: \"e048509e-80c3-4102-b7a6-bb3d30f06ec1\") " Feb 28 04:54:24 crc kubenswrapper[5014]: I0228 04:54:24.602352 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e048509e-80c3-4102-b7a6-bb3d30f06ec1-config-data-custom\") pod \"e048509e-80c3-4102-b7a6-bb3d30f06ec1\" (UID: \"e048509e-80c3-4102-b7a6-bb3d30f06ec1\") " Feb 28 04:54:24 crc kubenswrapper[5014]: I0228 04:54:24.602407 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/e048509e-80c3-4102-b7a6-bb3d30f06ec1-logs" (OuterVolumeSpecName: "logs") pod "e048509e-80c3-4102-b7a6-bb3d30f06ec1" (UID: "e048509e-80c3-4102-b7a6-bb3d30f06ec1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:54:24 crc kubenswrapper[5014]: I0228 04:54:24.602835 5014 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e048509e-80c3-4102-b7a6-bb3d30f06ec1-logs\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:24 crc kubenswrapper[5014]: I0228 04:54:24.610460 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e048509e-80c3-4102-b7a6-bb3d30f06ec1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e048509e-80c3-4102-b7a6-bb3d30f06ec1" (UID: "e048509e-80c3-4102-b7a6-bb3d30f06ec1"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:54:24 crc kubenswrapper[5014]: I0228 04:54:24.626276 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e048509e-80c3-4102-b7a6-bb3d30f06ec1-kube-api-access-phdpq" (OuterVolumeSpecName: "kube-api-access-phdpq") pod "e048509e-80c3-4102-b7a6-bb3d30f06ec1" (UID: "e048509e-80c3-4102-b7a6-bb3d30f06ec1"). InnerVolumeSpecName "kube-api-access-phdpq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:54:24 crc kubenswrapper[5014]: I0228 04:54:24.630785 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e048509e-80c3-4102-b7a6-bb3d30f06ec1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e048509e-80c3-4102-b7a6-bb3d30f06ec1" (UID: "e048509e-80c3-4102-b7a6-bb3d30f06ec1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:54:24 crc kubenswrapper[5014]: I0228 04:54:24.658188 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e048509e-80c3-4102-b7a6-bb3d30f06ec1-config-data" (OuterVolumeSpecName: "config-data") pod "e048509e-80c3-4102-b7a6-bb3d30f06ec1" (UID: "e048509e-80c3-4102-b7a6-bb3d30f06ec1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:54:24 crc kubenswrapper[5014]: I0228 04:54:24.704583 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e048509e-80c3-4102-b7a6-bb3d30f06ec1-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:24 crc kubenswrapper[5014]: I0228 04:54:24.704621 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e048509e-80c3-4102-b7a6-bb3d30f06ec1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:24 crc kubenswrapper[5014]: I0228 04:54:24.704631 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phdpq\" (UniqueName: \"kubernetes.io/projected/e048509e-80c3-4102-b7a6-bb3d30f06ec1-kube-api-access-phdpq\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:24 crc kubenswrapper[5014]: I0228 04:54:24.704641 5014 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e048509e-80c3-4102-b7a6-bb3d30f06ec1-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:25 crc kubenswrapper[5014]: I0228 04:54:25.381067 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5b8b564f66-hrmxb" event={"ID":"e048509e-80c3-4102-b7a6-bb3d30f06ec1","Type":"ContainerDied","Data":"5bcbc0efdd1ffe7725a6cbbc425df9c68e982a8aecd9b046c244e2c1f9e2530a"} Feb 28 04:54:25 crc kubenswrapper[5014]: I0228 04:54:25.381134 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5b8b564f66-hrmxb" Feb 28 04:54:25 crc kubenswrapper[5014]: I0228 04:54:25.381723 5014 scope.go:117] "RemoveContainer" containerID="37b2d115396c534df38db3022865a781542af116d7d950a4e5cafe5ae0a18697" Feb 28 04:54:25 crc kubenswrapper[5014]: I0228 04:54:25.429869 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5b8b564f66-hrmxb"] Feb 28 04:54:25 crc kubenswrapper[5014]: I0228 04:54:25.439244 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5b8b564f66-hrmxb"] Feb 28 04:54:25 crc kubenswrapper[5014]: I0228 04:54:25.452509 5014 scope.go:117] "RemoveContainer" containerID="aceabd4dc758e5ad5002c03cd24c4054af41007750634d96c9cab1edb321d655" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.184187 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e048509e-80c3-4102-b7a6-bb3d30f06ec1" path="/var/lib/kubelet/pods/e048509e-80c3-4102-b7a6-bb3d30f06ec1/volumes" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.258991 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.343228 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc65cd4a-e488-4a36-a861-18fddd2b9c6b-combined-ca-bundle\") pod \"bc65cd4a-e488-4a36-a861-18fddd2b9c6b\" (UID: \"bc65cd4a-e488-4a36-a861-18fddd2b9c6b\") " Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.343293 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc65cd4a-e488-4a36-a861-18fddd2b9c6b-config-data\") pod \"bc65cd4a-e488-4a36-a861-18fddd2b9c6b\" (UID: \"bc65cd4a-e488-4a36-a861-18fddd2b9c6b\") " Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.343337 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc65cd4a-e488-4a36-a861-18fddd2b9c6b-scripts\") pod \"bc65cd4a-e488-4a36-a861-18fddd2b9c6b\" (UID: \"bc65cd4a-e488-4a36-a861-18fddd2b9c6b\") " Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.343417 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bc65cd4a-e488-4a36-a861-18fddd2b9c6b-config-data-custom\") pod \"bc65cd4a-e488-4a36-a861-18fddd2b9c6b\" (UID: \"bc65cd4a-e488-4a36-a861-18fddd2b9c6b\") " Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.343470 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bc65cd4a-e488-4a36-a861-18fddd2b9c6b-etc-machine-id\") pod \"bc65cd4a-e488-4a36-a861-18fddd2b9c6b\" (UID: \"bc65cd4a-e488-4a36-a861-18fddd2b9c6b\") " Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.343541 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njd49\" (UniqueName: 
\"kubernetes.io/projected/bc65cd4a-e488-4a36-a861-18fddd2b9c6b-kube-api-access-njd49\") pod \"bc65cd4a-e488-4a36-a861-18fddd2b9c6b\" (UID: \"bc65cd4a-e488-4a36-a861-18fddd2b9c6b\") " Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.345617 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc65cd4a-e488-4a36-a861-18fddd2b9c6b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "bc65cd4a-e488-4a36-a861-18fddd2b9c6b" (UID: "bc65cd4a-e488-4a36-a861-18fddd2b9c6b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.351873 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc65cd4a-e488-4a36-a861-18fddd2b9c6b-kube-api-access-njd49" (OuterVolumeSpecName: "kube-api-access-njd49") pod "bc65cd4a-e488-4a36-a861-18fddd2b9c6b" (UID: "bc65cd4a-e488-4a36-a861-18fddd2b9c6b"). InnerVolumeSpecName "kube-api-access-njd49". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.352543 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc65cd4a-e488-4a36-a861-18fddd2b9c6b-scripts" (OuterVolumeSpecName: "scripts") pod "bc65cd4a-e488-4a36-a861-18fddd2b9c6b" (UID: "bc65cd4a-e488-4a36-a861-18fddd2b9c6b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.366879 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc65cd4a-e488-4a36-a861-18fddd2b9c6b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bc65cd4a-e488-4a36-a861-18fddd2b9c6b" (UID: "bc65cd4a-e488-4a36-a861-18fddd2b9c6b"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.411968 5014 generic.go:334] "Generic (PLEG): container finished" podID="bc65cd4a-e488-4a36-a861-18fddd2b9c6b" containerID="c333ec1fe46efa9be128b113cedf9fbb11b771efe5408907379b5842279f7fd9" exitCode=0 Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.412226 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bc65cd4a-e488-4a36-a861-18fddd2b9c6b","Type":"ContainerDied","Data":"c333ec1fe46efa9be128b113cedf9fbb11b771efe5408907379b5842279f7fd9"} Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.412291 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.412306 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"bc65cd4a-e488-4a36-a861-18fddd2b9c6b","Type":"ContainerDied","Data":"37e4118db6e3bf3cd19b9621d7c09b6ceac880705a8222b4e589cfd9e88d2ece"} Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.412326 5014 scope.go:117] "RemoveContainer" containerID="a20e7366a8af755e55533b0c39e29b6920b49f5a2790abacba4d0146be917e6c" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.425425 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc65cd4a-e488-4a36-a861-18fddd2b9c6b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc65cd4a-e488-4a36-a861-18fddd2b9c6b" (UID: "bc65cd4a-e488-4a36-a861-18fddd2b9c6b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.448603 5014 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bc65cd4a-e488-4a36-a861-18fddd2b9c6b-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.448667 5014 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bc65cd4a-e488-4a36-a861-18fddd2b9c6b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.448678 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njd49\" (UniqueName: \"kubernetes.io/projected/bc65cd4a-e488-4a36-a861-18fddd2b9c6b-kube-api-access-njd49\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.448689 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc65cd4a-e488-4a36-a861-18fddd2b9c6b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.448697 5014 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc65cd4a-e488-4a36-a861-18fddd2b9c6b-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.467825 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc65cd4a-e488-4a36-a861-18fddd2b9c6b-config-data" (OuterVolumeSpecName: "config-data") pod "bc65cd4a-e488-4a36-a861-18fddd2b9c6b" (UID: "bc65cd4a-e488-4a36-a861-18fddd2b9c6b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.492341 5014 scope.go:117] "RemoveContainer" containerID="c333ec1fe46efa9be128b113cedf9fbb11b771efe5408907379b5842279f7fd9" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.527914 5014 scope.go:117] "RemoveContainer" containerID="a20e7366a8af755e55533b0c39e29b6920b49f5a2790abacba4d0146be917e6c" Feb 28 04:54:26 crc kubenswrapper[5014]: E0228 04:54:26.531495 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a20e7366a8af755e55533b0c39e29b6920b49f5a2790abacba4d0146be917e6c\": container with ID starting with a20e7366a8af755e55533b0c39e29b6920b49f5a2790abacba4d0146be917e6c not found: ID does not exist" containerID="a20e7366a8af755e55533b0c39e29b6920b49f5a2790abacba4d0146be917e6c" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.531537 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a20e7366a8af755e55533b0c39e29b6920b49f5a2790abacba4d0146be917e6c"} err="failed to get container status \"a20e7366a8af755e55533b0c39e29b6920b49f5a2790abacba4d0146be917e6c\": rpc error: code = NotFound desc = could not find container \"a20e7366a8af755e55533b0c39e29b6920b49f5a2790abacba4d0146be917e6c\": container with ID starting with a20e7366a8af755e55533b0c39e29b6920b49f5a2790abacba4d0146be917e6c not found: ID does not exist" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.531558 5014 scope.go:117] "RemoveContainer" containerID="c333ec1fe46efa9be128b113cedf9fbb11b771efe5408907379b5842279f7fd9" Feb 28 04:54:26 crc kubenswrapper[5014]: E0228 04:54:26.532008 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c333ec1fe46efa9be128b113cedf9fbb11b771efe5408907379b5842279f7fd9\": container with ID starting with 
c333ec1fe46efa9be128b113cedf9fbb11b771efe5408907379b5842279f7fd9 not found: ID does not exist" containerID="c333ec1fe46efa9be128b113cedf9fbb11b771efe5408907379b5842279f7fd9" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.532052 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c333ec1fe46efa9be128b113cedf9fbb11b771efe5408907379b5842279f7fd9"} err="failed to get container status \"c333ec1fe46efa9be128b113cedf9fbb11b771efe5408907379b5842279f7fd9\": rpc error: code = NotFound desc = could not find container \"c333ec1fe46efa9be128b113cedf9fbb11b771efe5408907379b5842279f7fd9\": container with ID starting with c333ec1fe46efa9be128b113cedf9fbb11b771efe5408907379b5842279f7fd9 not found: ID does not exist" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.550235 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc65cd4a-e488-4a36-a861-18fddd2b9c6b-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.746771 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.753328 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.770480 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 28 04:54:26 crc kubenswrapper[5014]: E0228 04:54:26.770952 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc65cd4a-e488-4a36-a861-18fddd2b9c6b" containerName="probe" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.770979 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc65cd4a-e488-4a36-a861-18fddd2b9c6b" containerName="probe" Feb 28 04:54:26 crc kubenswrapper[5014]: E0228 04:54:26.771005 5014 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e048509e-80c3-4102-b7a6-bb3d30f06ec1" containerName="barbican-api-log" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.771016 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="e048509e-80c3-4102-b7a6-bb3d30f06ec1" containerName="barbican-api-log" Feb 28 04:54:26 crc kubenswrapper[5014]: E0228 04:54:26.771040 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="651265e8-74ac-412e-a823-a7e19b2c04b6" containerName="dnsmasq-dns" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.771048 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="651265e8-74ac-412e-a823-a7e19b2c04b6" containerName="dnsmasq-dns" Feb 28 04:54:26 crc kubenswrapper[5014]: E0228 04:54:26.771068 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="651265e8-74ac-412e-a823-a7e19b2c04b6" containerName="init" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.771076 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="651265e8-74ac-412e-a823-a7e19b2c04b6" containerName="init" Feb 28 04:54:26 crc kubenswrapper[5014]: E0228 04:54:26.771104 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e048509e-80c3-4102-b7a6-bb3d30f06ec1" containerName="barbican-api" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.771114 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="e048509e-80c3-4102-b7a6-bb3d30f06ec1" containerName="barbican-api" Feb 28 04:54:26 crc kubenswrapper[5014]: E0228 04:54:26.771129 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc65cd4a-e488-4a36-a861-18fddd2b9c6b" containerName="cinder-scheduler" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.771137 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc65cd4a-e488-4a36-a861-18fddd2b9c6b" containerName="cinder-scheduler" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.771329 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="e048509e-80c3-4102-b7a6-bb3d30f06ec1" 
containerName="barbican-api" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.771357 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="e048509e-80c3-4102-b7a6-bb3d30f06ec1" containerName="barbican-api-log" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.771380 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="651265e8-74ac-412e-a823-a7e19b2c04b6" containerName="dnsmasq-dns" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.771399 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc65cd4a-e488-4a36-a861-18fddd2b9c6b" containerName="cinder-scheduler" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.771411 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc65cd4a-e488-4a36-a861-18fddd2b9c6b" containerName="probe" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.772997 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.775850 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.788199 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.855768 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29a28811-7002-4b5e-a6d7-8c204bc306db-scripts\") pod \"cinder-scheduler-0\" (UID: \"29a28811-7002-4b5e-a6d7-8c204bc306db\") " pod="openstack/cinder-scheduler-0" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.855847 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29a28811-7002-4b5e-a6d7-8c204bc306db-combined-ca-bundle\") pod 
\"cinder-scheduler-0\" (UID: \"29a28811-7002-4b5e-a6d7-8c204bc306db\") " pod="openstack/cinder-scheduler-0" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.855910 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29a28811-7002-4b5e-a6d7-8c204bc306db-config-data\") pod \"cinder-scheduler-0\" (UID: \"29a28811-7002-4b5e-a6d7-8c204bc306db\") " pod="openstack/cinder-scheduler-0" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.856097 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29a28811-7002-4b5e-a6d7-8c204bc306db-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"29a28811-7002-4b5e-a6d7-8c204bc306db\") " pod="openstack/cinder-scheduler-0" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.856171 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q89z\" (UniqueName: \"kubernetes.io/projected/29a28811-7002-4b5e-a6d7-8c204bc306db-kube-api-access-2q89z\") pod \"cinder-scheduler-0\" (UID: \"29a28811-7002-4b5e-a6d7-8c204bc306db\") " pod="openstack/cinder-scheduler-0" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.856203 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29a28811-7002-4b5e-a6d7-8c204bc306db-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"29a28811-7002-4b5e-a6d7-8c204bc306db\") " pod="openstack/cinder-scheduler-0" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.959336 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29a28811-7002-4b5e-a6d7-8c204bc306db-scripts\") pod \"cinder-scheduler-0\" (UID: \"29a28811-7002-4b5e-a6d7-8c204bc306db\") " 
pod="openstack/cinder-scheduler-0" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.959942 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29a28811-7002-4b5e-a6d7-8c204bc306db-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"29a28811-7002-4b5e-a6d7-8c204bc306db\") " pod="openstack/cinder-scheduler-0" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.960135 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29a28811-7002-4b5e-a6d7-8c204bc306db-config-data\") pod \"cinder-scheduler-0\" (UID: \"29a28811-7002-4b5e-a6d7-8c204bc306db\") " pod="openstack/cinder-scheduler-0" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.960331 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29a28811-7002-4b5e-a6d7-8c204bc306db-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"29a28811-7002-4b5e-a6d7-8c204bc306db\") " pod="openstack/cinder-scheduler-0" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.960501 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q89z\" (UniqueName: \"kubernetes.io/projected/29a28811-7002-4b5e-a6d7-8c204bc306db-kube-api-access-2q89z\") pod \"cinder-scheduler-0\" (UID: \"29a28811-7002-4b5e-a6d7-8c204bc306db\") " pod="openstack/cinder-scheduler-0" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.960640 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29a28811-7002-4b5e-a6d7-8c204bc306db-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"29a28811-7002-4b5e-a6d7-8c204bc306db\") " pod="openstack/cinder-scheduler-0" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.960534 5014 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29a28811-7002-4b5e-a6d7-8c204bc306db-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"29a28811-7002-4b5e-a6d7-8c204bc306db\") " pod="openstack/cinder-scheduler-0" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.964533 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29a28811-7002-4b5e-a6d7-8c204bc306db-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"29a28811-7002-4b5e-a6d7-8c204bc306db\") " pod="openstack/cinder-scheduler-0" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.965977 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29a28811-7002-4b5e-a6d7-8c204bc306db-config-data\") pod \"cinder-scheduler-0\" (UID: \"29a28811-7002-4b5e-a6d7-8c204bc306db\") " pod="openstack/cinder-scheduler-0" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.966274 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29a28811-7002-4b5e-a6d7-8c204bc306db-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"29a28811-7002-4b5e-a6d7-8c204bc306db\") " pod="openstack/cinder-scheduler-0" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.972701 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29a28811-7002-4b5e-a6d7-8c204bc306db-scripts\") pod \"cinder-scheduler-0\" (UID: \"29a28811-7002-4b5e-a6d7-8c204bc306db\") " pod="openstack/cinder-scheduler-0" Feb 28 04:54:26 crc kubenswrapper[5014]: I0228 04:54:26.976284 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q89z\" (UniqueName: \"kubernetes.io/projected/29a28811-7002-4b5e-a6d7-8c204bc306db-kube-api-access-2q89z\") pod \"cinder-scheduler-0\" (UID: 
\"29a28811-7002-4b5e-a6d7-8c204bc306db\") " pod="openstack/cinder-scheduler-0" Feb 28 04:54:27 crc kubenswrapper[5014]: I0228 04:54:27.083991 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-799995d5cd-97xmn" Feb 28 04:54:27 crc kubenswrapper[5014]: I0228 04:54:27.101060 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 28 04:54:27 crc kubenswrapper[5014]: I0228 04:54:27.576062 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-94f64597b-rtxdm" Feb 28 04:54:27 crc kubenswrapper[5014]: I0228 04:54:27.664830 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 28 04:54:27 crc kubenswrapper[5014]: W0228 04:54:27.682198 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29a28811_7002_4b5e_a6d7_8c204bc306db.slice/crio-1979b41a5b06f8c715a71c0ae2ffb4c9809bd9ca04325a7fd1c4a1378ff485c0 WatchSource:0}: Error finding container 1979b41a5b06f8c715a71c0ae2ffb4c9809bd9ca04325a7fd1c4a1378ff485c0: Status 404 returned error can't find the container with id 1979b41a5b06f8c715a71c0ae2ffb4c9809bd9ca04325a7fd1c4a1378ff485c0 Feb 28 04:54:28 crc kubenswrapper[5014]: I0228 04:54:28.194699 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc65cd4a-e488-4a36-a861-18fddd2b9c6b" path="/var/lib/kubelet/pods/bc65cd4a-e488-4a36-a861-18fddd2b9c6b/volumes" Feb 28 04:54:28 crc kubenswrapper[5014]: I0228 04:54:28.442265 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"29a28811-7002-4b5e-a6d7-8c204bc306db","Type":"ContainerStarted","Data":"902e3707b68bf6cd288c3fba31b908fce9e4a558185a0fef2dd55818729ea984"} Feb 28 04:54:28 crc kubenswrapper[5014]: I0228 04:54:28.442315 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-scheduler-0" event={"ID":"29a28811-7002-4b5e-a6d7-8c204bc306db","Type":"ContainerStarted","Data":"1979b41a5b06f8c715a71c0ae2ffb4c9809bd9ca04325a7fd1c4a1378ff485c0"} Feb 28 04:54:28 crc kubenswrapper[5014]: I0228 04:54:28.741217 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 28 04:54:29 crc kubenswrapper[5014]: I0228 04:54:29.452891 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"29a28811-7002-4b5e-a6d7-8c204bc306db","Type":"ContainerStarted","Data":"6fe430a1c3c50c1924b7f8aadc0e3ce6db90c427b7a5126cd4b9e688e250851b"} Feb 28 04:54:29 crc kubenswrapper[5014]: I0228 04:54:29.474495 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.474468347 podStartE2EDuration="3.474468347s" podCreationTimestamp="2026-02-28 04:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:54:29.467160031 +0000 UTC m=+1258.137285941" watchObservedRunningTime="2026-02-28 04:54:29.474468347 +0000 UTC m=+1258.144594257" Feb 28 04:54:29 crc kubenswrapper[5014]: I0228 04:54:29.977774 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-58dcfcf9bc-4rtlk" Feb 28 04:54:30 crc kubenswrapper[5014]: I0228 04:54:30.119878 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-94f64597b-rtxdm"] Feb 28 04:54:30 crc kubenswrapper[5014]: I0228 04:54:30.120241 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-94f64597b-rtxdm" podUID="00ee4598-7f76-410b-8737-7086fd0b5aad" containerName="neutron-api" containerID="cri-o://59d50015c9164b1e43e2b391d3dfa8a612b6ce89185cc136fcb117c164a01c45" gracePeriod=30 Feb 28 04:54:30 crc kubenswrapper[5014]: I0228 04:54:30.121084 5014 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-94f64597b-rtxdm" podUID="00ee4598-7f76-410b-8737-7086fd0b5aad" containerName="neutron-httpd" containerID="cri-o://297640f854d1ad0b2237e0bf2efb25418366a3da9d2d0a0b0ff30285ecba1b3c" gracePeriod=30 Feb 28 04:54:30 crc kubenswrapper[5014]: I0228 04:54:30.711451 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5cd4874894-s6tz4" Feb 28 04:54:30 crc kubenswrapper[5014]: I0228 04:54:30.712554 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-5cd4874894-s6tz4" Feb 28 04:54:31 crc kubenswrapper[5014]: I0228 04:54:31.492602 5014 generic.go:334] "Generic (PLEG): container finished" podID="00ee4598-7f76-410b-8737-7086fd0b5aad" containerID="297640f854d1ad0b2237e0bf2efb25418366a3da9d2d0a0b0ff30285ecba1b3c" exitCode=0 Feb 28 04:54:31 crc kubenswrapper[5014]: I0228 04:54:31.492675 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-94f64597b-rtxdm" event={"ID":"00ee4598-7f76-410b-8737-7086fd0b5aad","Type":"ContainerDied","Data":"297640f854d1ad0b2237e0bf2efb25418366a3da9d2d0a0b0ff30285ecba1b3c"} Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.101838 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.204868 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.207887 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.211298 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.211297 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.211489 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-dgkcg" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.221027 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.226602 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae41ad3-a997-4a4a-91ab-34175d98fb97-combined-ca-bundle\") pod \"openstackclient\" (UID: \"dae41ad3-a997-4a4a-91ab-34175d98fb97\") " pod="openstack/openstackclient" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.226645 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dae41ad3-a997-4a4a-91ab-34175d98fb97-openstack-config\") pod \"openstackclient\" (UID: \"dae41ad3-a997-4a4a-91ab-34175d98fb97\") " pod="openstack/openstackclient" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.226688 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gd24\" (UniqueName: \"kubernetes.io/projected/dae41ad3-a997-4a4a-91ab-34175d98fb97-kube-api-access-7gd24\") pod \"openstackclient\" (UID: \"dae41ad3-a997-4a4a-91ab-34175d98fb97\") " pod="openstack/openstackclient" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.226926 5014 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dae41ad3-a997-4a4a-91ab-34175d98fb97-openstack-config-secret\") pod \"openstackclient\" (UID: \"dae41ad3-a997-4a4a-91ab-34175d98fb97\") " pod="openstack/openstackclient" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.328704 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dae41ad3-a997-4a4a-91ab-34175d98fb97-openstack-config-secret\") pod \"openstackclient\" (UID: \"dae41ad3-a997-4a4a-91ab-34175d98fb97\") " pod="openstack/openstackclient" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.328798 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae41ad3-a997-4a4a-91ab-34175d98fb97-combined-ca-bundle\") pod \"openstackclient\" (UID: \"dae41ad3-a997-4a4a-91ab-34175d98fb97\") " pod="openstack/openstackclient" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.328855 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/dae41ad3-a997-4a4a-91ab-34175d98fb97-openstack-config\") pod \"openstackclient\" (UID: \"dae41ad3-a997-4a4a-91ab-34175d98fb97\") " pod="openstack/openstackclient" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.328904 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gd24\" (UniqueName: \"kubernetes.io/projected/dae41ad3-a997-4a4a-91ab-34175d98fb97-kube-api-access-7gd24\") pod \"openstackclient\" (UID: \"dae41ad3-a997-4a4a-91ab-34175d98fb97\") " pod="openstack/openstackclient" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.329796 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/dae41ad3-a997-4a4a-91ab-34175d98fb97-openstack-config\") pod \"openstackclient\" (UID: \"dae41ad3-a997-4a4a-91ab-34175d98fb97\") " pod="openstack/openstackclient" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.331242 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6c68684b95-vvvhf"] Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.333251 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6c68684b95-vvvhf" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.335739 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/dae41ad3-a997-4a4a-91ab-34175d98fb97-openstack-config-secret\") pod \"openstackclient\" (UID: \"dae41ad3-a997-4a4a-91ab-34175d98fb97\") " pod="openstack/openstackclient" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.340250 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.340471 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.340585 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.370465 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dae41ad3-a997-4a4a-91ab-34175d98fb97-combined-ca-bundle\") pod \"openstackclient\" (UID: \"dae41ad3-a997-4a4a-91ab-34175d98fb97\") " pod="openstack/openstackclient" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.373155 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6c68684b95-vvvhf"] Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.373875 5014 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gd24\" (UniqueName: \"kubernetes.io/projected/dae41ad3-a997-4a4a-91ab-34175d98fb97-kube-api-access-7gd24\") pod \"openstackclient\" (UID: \"dae41ad3-a997-4a4a-91ab-34175d98fb97\") " pod="openstack/openstackclient" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.431740 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d31e889-55bb-4dc4-b470-dcb11b4438a7-config-data\") pod \"swift-proxy-6c68684b95-vvvhf\" (UID: \"6d31e889-55bb-4dc4-b470-dcb11b4438a7\") " pod="openstack/swift-proxy-6c68684b95-vvvhf" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.431959 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d31e889-55bb-4dc4-b470-dcb11b4438a7-combined-ca-bundle\") pod \"swift-proxy-6c68684b95-vvvhf\" (UID: \"6d31e889-55bb-4dc4-b470-dcb11b4438a7\") " pod="openstack/swift-proxy-6c68684b95-vvvhf" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.432143 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nb4p\" (UniqueName: \"kubernetes.io/projected/6d31e889-55bb-4dc4-b470-dcb11b4438a7-kube-api-access-2nb4p\") pod \"swift-proxy-6c68684b95-vvvhf\" (UID: \"6d31e889-55bb-4dc4-b470-dcb11b4438a7\") " pod="openstack/swift-proxy-6c68684b95-vvvhf" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.432250 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d31e889-55bb-4dc4-b470-dcb11b4438a7-public-tls-certs\") pod \"swift-proxy-6c68684b95-vvvhf\" (UID: \"6d31e889-55bb-4dc4-b470-dcb11b4438a7\") " pod="openstack/swift-proxy-6c68684b95-vvvhf" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 
04:54:32.432295 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d31e889-55bb-4dc4-b470-dcb11b4438a7-internal-tls-certs\") pod \"swift-proxy-6c68684b95-vvvhf\" (UID: \"6d31e889-55bb-4dc4-b470-dcb11b4438a7\") " pod="openstack/swift-proxy-6c68684b95-vvvhf" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.432320 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6d31e889-55bb-4dc4-b470-dcb11b4438a7-etc-swift\") pod \"swift-proxy-6c68684b95-vvvhf\" (UID: \"6d31e889-55bb-4dc4-b470-dcb11b4438a7\") " pod="openstack/swift-proxy-6c68684b95-vvvhf" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.432406 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d31e889-55bb-4dc4-b470-dcb11b4438a7-log-httpd\") pod \"swift-proxy-6c68684b95-vvvhf\" (UID: \"6d31e889-55bb-4dc4-b470-dcb11b4438a7\") " pod="openstack/swift-proxy-6c68684b95-vvvhf" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.432452 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d31e889-55bb-4dc4-b470-dcb11b4438a7-run-httpd\") pod \"swift-proxy-6c68684b95-vvvhf\" (UID: \"6d31e889-55bb-4dc4-b470-dcb11b4438a7\") " pod="openstack/swift-proxy-6c68684b95-vvvhf" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.533400 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d31e889-55bb-4dc4-b470-dcb11b4438a7-config-data\") pod \"swift-proxy-6c68684b95-vvvhf\" (UID: \"6d31e889-55bb-4dc4-b470-dcb11b4438a7\") " pod="openstack/swift-proxy-6c68684b95-vvvhf" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 
04:54:32.533468 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d31e889-55bb-4dc4-b470-dcb11b4438a7-combined-ca-bundle\") pod \"swift-proxy-6c68684b95-vvvhf\" (UID: \"6d31e889-55bb-4dc4-b470-dcb11b4438a7\") " pod="openstack/swift-proxy-6c68684b95-vvvhf" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.533528 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nb4p\" (UniqueName: \"kubernetes.io/projected/6d31e889-55bb-4dc4-b470-dcb11b4438a7-kube-api-access-2nb4p\") pod \"swift-proxy-6c68684b95-vvvhf\" (UID: \"6d31e889-55bb-4dc4-b470-dcb11b4438a7\") " pod="openstack/swift-proxy-6c68684b95-vvvhf" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.533566 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d31e889-55bb-4dc4-b470-dcb11b4438a7-public-tls-certs\") pod \"swift-proxy-6c68684b95-vvvhf\" (UID: \"6d31e889-55bb-4dc4-b470-dcb11b4438a7\") " pod="openstack/swift-proxy-6c68684b95-vvvhf" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.533589 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6d31e889-55bb-4dc4-b470-dcb11b4438a7-etc-swift\") pod \"swift-proxy-6c68684b95-vvvhf\" (UID: \"6d31e889-55bb-4dc4-b470-dcb11b4438a7\") " pod="openstack/swift-proxy-6c68684b95-vvvhf" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.533612 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d31e889-55bb-4dc4-b470-dcb11b4438a7-internal-tls-certs\") pod \"swift-proxy-6c68684b95-vvvhf\" (UID: \"6d31e889-55bb-4dc4-b470-dcb11b4438a7\") " pod="openstack/swift-proxy-6c68684b95-vvvhf" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.533665 5014 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d31e889-55bb-4dc4-b470-dcb11b4438a7-log-httpd\") pod \"swift-proxy-6c68684b95-vvvhf\" (UID: \"6d31e889-55bb-4dc4-b470-dcb11b4438a7\") " pod="openstack/swift-proxy-6c68684b95-vvvhf" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.533690 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d31e889-55bb-4dc4-b470-dcb11b4438a7-run-httpd\") pod \"swift-proxy-6c68684b95-vvvhf\" (UID: \"6d31e889-55bb-4dc4-b470-dcb11b4438a7\") " pod="openstack/swift-proxy-6c68684b95-vvvhf" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.534179 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d31e889-55bb-4dc4-b470-dcb11b4438a7-run-httpd\") pod \"swift-proxy-6c68684b95-vvvhf\" (UID: \"6d31e889-55bb-4dc4-b470-dcb11b4438a7\") " pod="openstack/swift-proxy-6c68684b95-vvvhf" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.534514 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6d31e889-55bb-4dc4-b470-dcb11b4438a7-log-httpd\") pod \"swift-proxy-6c68684b95-vvvhf\" (UID: \"6d31e889-55bb-4dc4-b470-dcb11b4438a7\") " pod="openstack/swift-proxy-6c68684b95-vvvhf" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.537785 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6d31e889-55bb-4dc4-b470-dcb11b4438a7-etc-swift\") pod \"swift-proxy-6c68684b95-vvvhf\" (UID: \"6d31e889-55bb-4dc4-b470-dcb11b4438a7\") " pod="openstack/swift-proxy-6c68684b95-vvvhf" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.540693 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6d31e889-55bb-4dc4-b470-dcb11b4438a7-internal-tls-certs\") pod \"swift-proxy-6c68684b95-vvvhf\" (UID: \"6d31e889-55bb-4dc4-b470-dcb11b4438a7\") " pod="openstack/swift-proxy-6c68684b95-vvvhf" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.543191 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6d31e889-55bb-4dc4-b470-dcb11b4438a7-combined-ca-bundle\") pod \"swift-proxy-6c68684b95-vvvhf\" (UID: \"6d31e889-55bb-4dc4-b470-dcb11b4438a7\") " pod="openstack/swift-proxy-6c68684b95-vvvhf" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.543694 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6d31e889-55bb-4dc4-b470-dcb11b4438a7-config-data\") pod \"swift-proxy-6c68684b95-vvvhf\" (UID: \"6d31e889-55bb-4dc4-b470-dcb11b4438a7\") " pod="openstack/swift-proxy-6c68684b95-vvvhf" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.546626 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.555769 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nb4p\" (UniqueName: \"kubernetes.io/projected/6d31e889-55bb-4dc4-b470-dcb11b4438a7-kube-api-access-2nb4p\") pod \"swift-proxy-6c68684b95-vvvhf\" (UID: \"6d31e889-55bb-4dc4-b470-dcb11b4438a7\") " pod="openstack/swift-proxy-6c68684b95-vvvhf" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.563175 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6d31e889-55bb-4dc4-b470-dcb11b4438a7-public-tls-certs\") pod \"swift-proxy-6c68684b95-vvvhf\" (UID: \"6d31e889-55bb-4dc4-b470-dcb11b4438a7\") " pod="openstack/swift-proxy-6c68684b95-vvvhf" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.662070 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-dwb75"] Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.663200 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-dwb75" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.674257 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dwb75"] Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.749029 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cwvq\" (UniqueName: \"kubernetes.io/projected/5d9ce69c-9aeb-4120-9abb-d052b56ff801-kube-api-access-2cwvq\") pod \"nova-api-db-create-dwb75\" (UID: \"5d9ce69c-9aeb-4120-9abb-d052b56ff801\") " pod="openstack/nova-api-db-create-dwb75" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.749101 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d9ce69c-9aeb-4120-9abb-d052b56ff801-operator-scripts\") pod \"nova-api-db-create-dwb75\" (UID: \"5d9ce69c-9aeb-4120-9abb-d052b56ff801\") " pod="openstack/nova-api-db-create-dwb75" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.757141 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-6c68684b95-vvvhf" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.757320 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-jg75p"] Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.758499 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-jg75p" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.766075 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-jg75p"] Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.845899 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-q8snd"] Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.847308 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-q8snd" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.853767 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2cwvq\" (UniqueName: \"kubernetes.io/projected/5d9ce69c-9aeb-4120-9abb-d052b56ff801-kube-api-access-2cwvq\") pod \"nova-api-db-create-dwb75\" (UID: \"5d9ce69c-9aeb-4120-9abb-d052b56ff801\") " pod="openstack/nova-api-db-create-dwb75" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.853825 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d9ce69c-9aeb-4120-9abb-d052b56ff801-operator-scripts\") pod \"nova-api-db-create-dwb75\" (UID: \"5d9ce69c-9aeb-4120-9abb-d052b56ff801\") " pod="openstack/nova-api-db-create-dwb75" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.853904 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00673aaf-5abc-4e06-91dd-8a1d71a5e726-operator-scripts\") pod \"nova-cell1-db-create-q8snd\" (UID: \"00673aaf-5abc-4e06-91dd-8a1d71a5e726\") " pod="openstack/nova-cell1-db-create-q8snd" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.853956 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v44zc\" (UniqueName: 
\"kubernetes.io/projected/7bf06a59-bec2-4829-bf19-65ed9856d251-kube-api-access-v44zc\") pod \"nova-cell0-db-create-jg75p\" (UID: \"7bf06a59-bec2-4829-bf19-65ed9856d251\") " pod="openstack/nova-cell0-db-create-jg75p" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.853977 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bf06a59-bec2-4829-bf19-65ed9856d251-operator-scripts\") pod \"nova-cell0-db-create-jg75p\" (UID: \"7bf06a59-bec2-4829-bf19-65ed9856d251\") " pod="openstack/nova-cell0-db-create-jg75p" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.854025 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p87lx\" (UniqueName: \"kubernetes.io/projected/00673aaf-5abc-4e06-91dd-8a1d71a5e726-kube-api-access-p87lx\") pod \"nova-cell1-db-create-q8snd\" (UID: \"00673aaf-5abc-4e06-91dd-8a1d71a5e726\") " pod="openstack/nova-cell1-db-create-q8snd" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.854559 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d9ce69c-9aeb-4120-9abb-d052b56ff801-operator-scripts\") pod \"nova-api-db-create-dwb75\" (UID: \"5d9ce69c-9aeb-4120-9abb-d052b56ff801\") " pod="openstack/nova-api-db-create-dwb75" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.886012 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-36df-account-create-update-wx84t"] Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.886934 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2cwvq\" (UniqueName: \"kubernetes.io/projected/5d9ce69c-9aeb-4120-9abb-d052b56ff801-kube-api-access-2cwvq\") pod \"nova-api-db-create-dwb75\" (UID: \"5d9ce69c-9aeb-4120-9abb-d052b56ff801\") " pod="openstack/nova-api-db-create-dwb75" Feb 28 04:54:32 crc 
kubenswrapper[5014]: I0228 04:54:32.887519 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-36df-account-create-update-wx84t" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.900150 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.959217 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00673aaf-5abc-4e06-91dd-8a1d71a5e726-operator-scripts\") pod \"nova-cell1-db-create-q8snd\" (UID: \"00673aaf-5abc-4e06-91dd-8a1d71a5e726\") " pod="openstack/nova-cell1-db-create-q8snd" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.959560 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v44zc\" (UniqueName: \"kubernetes.io/projected/7bf06a59-bec2-4829-bf19-65ed9856d251-kube-api-access-v44zc\") pod \"nova-cell0-db-create-jg75p\" (UID: \"7bf06a59-bec2-4829-bf19-65ed9856d251\") " pod="openstack/nova-cell0-db-create-jg75p" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.959591 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bf06a59-bec2-4829-bf19-65ed9856d251-operator-scripts\") pod \"nova-cell0-db-create-jg75p\" (UID: \"7bf06a59-bec2-4829-bf19-65ed9856d251\") " pod="openstack/nova-cell0-db-create-jg75p" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.959679 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p87lx\" (UniqueName: \"kubernetes.io/projected/00673aaf-5abc-4e06-91dd-8a1d71a5e726-kube-api-access-p87lx\") pod \"nova-cell1-db-create-q8snd\" (UID: \"00673aaf-5abc-4e06-91dd-8a1d71a5e726\") " pod="openstack/nova-cell1-db-create-q8snd" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.959968 5014 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00673aaf-5abc-4e06-91dd-8a1d71a5e726-operator-scripts\") pod \"nova-cell1-db-create-q8snd\" (UID: \"00673aaf-5abc-4e06-91dd-8a1d71a5e726\") " pod="openstack/nova-cell1-db-create-q8snd" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.960613 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bf06a59-bec2-4829-bf19-65ed9856d251-operator-scripts\") pod \"nova-cell0-db-create-jg75p\" (UID: \"7bf06a59-bec2-4829-bf19-65ed9856d251\") " pod="openstack/nova-cell0-db-create-jg75p" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.964250 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-q8snd"] Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.985549 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p87lx\" (UniqueName: \"kubernetes.io/projected/00673aaf-5abc-4e06-91dd-8a1d71a5e726-kube-api-access-p87lx\") pod \"nova-cell1-db-create-q8snd\" (UID: \"00673aaf-5abc-4e06-91dd-8a1d71a5e726\") " pod="openstack/nova-cell1-db-create-q8snd" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.988599 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v44zc\" (UniqueName: \"kubernetes.io/projected/7bf06a59-bec2-4829-bf19-65ed9856d251-kube-api-access-v44zc\") pod \"nova-cell0-db-create-jg75p\" (UID: \"7bf06a59-bec2-4829-bf19-65ed9856d251\") " pod="openstack/nova-cell0-db-create-jg75p" Feb 28 04:54:32 crc kubenswrapper[5014]: I0228 04:54:32.991675 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-q8snd" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:32.999957 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-dwb75" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.003333 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-36df-account-create-update-wx84t"] Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.060801 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-3ea3-account-create-update-h5bnp"] Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.062076 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3ea3-account-create-update-h5bnp" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.062116 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5ba47cb-6efc-46ac-97df-b895cac925a3-operator-scripts\") pod \"nova-api-36df-account-create-update-wx84t\" (UID: \"d5ba47cb-6efc-46ac-97df-b895cac925a3\") " pod="openstack/nova-api-36df-account-create-update-wx84t" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.062194 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqrsc\" (UniqueName: \"kubernetes.io/projected/d5ba47cb-6efc-46ac-97df-b895cac925a3-kube-api-access-bqrsc\") pod \"nova-api-36df-account-create-update-wx84t\" (UID: \"d5ba47cb-6efc-46ac-97df-b895cac925a3\") " pod="openstack/nova-api-36df-account-create-update-wx84t" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.064213 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.086869 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-3ea3-account-create-update-h5bnp"] Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.136362 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-jg75p" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.163635 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5ba47cb-6efc-46ac-97df-b895cac925a3-operator-scripts\") pod \"nova-api-36df-account-create-update-wx84t\" (UID: \"d5ba47cb-6efc-46ac-97df-b895cac925a3\") " pod="openstack/nova-api-36df-account-create-update-wx84t" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.163693 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fd00cf3-841a-4ecc-b28c-8ba9d6d00894-operator-scripts\") pod \"nova-cell0-3ea3-account-create-update-h5bnp\" (UID: \"0fd00cf3-841a-4ecc-b28c-8ba9d6d00894\") " pod="openstack/nova-cell0-3ea3-account-create-update-h5bnp" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.163723 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqrsc\" (UniqueName: \"kubernetes.io/projected/d5ba47cb-6efc-46ac-97df-b895cac925a3-kube-api-access-bqrsc\") pod \"nova-api-36df-account-create-update-wx84t\" (UID: \"d5ba47cb-6efc-46ac-97df-b895cac925a3\") " pod="openstack/nova-api-36df-account-create-update-wx84t" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.163769 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js6t7\" (UniqueName: \"kubernetes.io/projected/0fd00cf3-841a-4ecc-b28c-8ba9d6d00894-kube-api-access-js6t7\") pod \"nova-cell0-3ea3-account-create-update-h5bnp\" (UID: \"0fd00cf3-841a-4ecc-b28c-8ba9d6d00894\") " pod="openstack/nova-cell0-3ea3-account-create-update-h5bnp" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.164941 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d5ba47cb-6efc-46ac-97df-b895cac925a3-operator-scripts\") pod \"nova-api-36df-account-create-update-wx84t\" (UID: \"d5ba47cb-6efc-46ac-97df-b895cac925a3\") " pod="openstack/nova-api-36df-account-create-update-wx84t" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.181531 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqrsc\" (UniqueName: \"kubernetes.io/projected/d5ba47cb-6efc-46ac-97df-b895cac925a3-kube-api-access-bqrsc\") pod \"nova-api-36df-account-create-update-wx84t\" (UID: \"d5ba47cb-6efc-46ac-97df-b895cac925a3\") " pod="openstack/nova-api-36df-account-create-update-wx84t" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.256558 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.266002 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-ceba-account-create-update-cjxpn"] Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.266072 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fd00cf3-841a-4ecc-b28c-8ba9d6d00894-operator-scripts\") pod \"nova-cell0-3ea3-account-create-update-h5bnp\" (UID: \"0fd00cf3-841a-4ecc-b28c-8ba9d6d00894\") " pod="openstack/nova-cell0-3ea3-account-create-update-h5bnp" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.266166 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js6t7\" (UniqueName: \"kubernetes.io/projected/0fd00cf3-841a-4ecc-b28c-8ba9d6d00894-kube-api-access-js6t7\") pod \"nova-cell0-3ea3-account-create-update-h5bnp\" (UID: \"0fd00cf3-841a-4ecc-b28c-8ba9d6d00894\") " pod="openstack/nova-cell0-3ea3-account-create-update-h5bnp" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.267131 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fd00cf3-841a-4ecc-b28c-8ba9d6d00894-operator-scripts\") pod \"nova-cell0-3ea3-account-create-update-h5bnp\" (UID: \"0fd00cf3-841a-4ecc-b28c-8ba9d6d00894\") " pod="openstack/nova-cell0-3ea3-account-create-update-h5bnp" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.267523 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ceba-account-create-update-cjxpn" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.270035 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.276463 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ceba-account-create-update-cjxpn"] Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.286973 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js6t7\" (UniqueName: \"kubernetes.io/projected/0fd00cf3-841a-4ecc-b28c-8ba9d6d00894-kube-api-access-js6t7\") pod \"nova-cell0-3ea3-account-create-update-h5bnp\" (UID: \"0fd00cf3-841a-4ecc-b28c-8ba9d6d00894\") " pod="openstack/nova-cell0-3ea3-account-create-update-h5bnp" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.308612 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-36df-account-create-update-wx84t" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.367783 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1d0cd80-c46f-4f36-904f-ce3128cc997f-operator-scripts\") pod \"nova-cell1-ceba-account-create-update-cjxpn\" (UID: \"c1d0cd80-c46f-4f36-904f-ce3128cc997f\") " pod="openstack/nova-cell1-ceba-account-create-update-cjxpn" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.367946 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kzx4\" (UniqueName: \"kubernetes.io/projected/c1d0cd80-c46f-4f36-904f-ce3128cc997f-kube-api-access-4kzx4\") pod \"nova-cell1-ceba-account-create-update-cjxpn\" (UID: \"c1d0cd80-c46f-4f36-904f-ce3128cc997f\") " pod="openstack/nova-cell1-ceba-account-create-update-cjxpn" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.390637 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-3ea3-account-create-update-h5bnp" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.469297 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1d0cd80-c46f-4f36-904f-ce3128cc997f-operator-scripts\") pod \"nova-cell1-ceba-account-create-update-cjxpn\" (UID: \"c1d0cd80-c46f-4f36-904f-ce3128cc997f\") " pod="openstack/nova-cell1-ceba-account-create-update-cjxpn" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.469380 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kzx4\" (UniqueName: \"kubernetes.io/projected/c1d0cd80-c46f-4f36-904f-ce3128cc997f-kube-api-access-4kzx4\") pod \"nova-cell1-ceba-account-create-update-cjxpn\" (UID: \"c1d0cd80-c46f-4f36-904f-ce3128cc997f\") " pod="openstack/nova-cell1-ceba-account-create-update-cjxpn" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.471379 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1d0cd80-c46f-4f36-904f-ce3128cc997f-operator-scripts\") pod \"nova-cell1-ceba-account-create-update-cjxpn\" (UID: \"c1d0cd80-c46f-4f36-904f-ce3128cc997f\") " pod="openstack/nova-cell1-ceba-account-create-update-cjxpn" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.488028 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kzx4\" (UniqueName: \"kubernetes.io/projected/c1d0cd80-c46f-4f36-904f-ce3128cc997f-kube-api-access-4kzx4\") pod \"nova-cell1-ceba-account-create-update-cjxpn\" (UID: \"c1d0cd80-c46f-4f36-904f-ce3128cc997f\") " pod="openstack/nova-cell1-ceba-account-create-update-cjxpn" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.505517 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6c68684b95-vvvhf"] Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 
04:54:33.515979 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"dae41ad3-a997-4a4a-91ab-34175d98fb97","Type":"ContainerStarted","Data":"7338cd21f8e06e4d5bc65d7cd042e7dec4e341db0a6031ace11e7b958e6f0b83"} Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.523064 5014 generic.go:334] "Generic (PLEG): container finished" podID="00ee4598-7f76-410b-8737-7086fd0b5aad" containerID="59d50015c9164b1e43e2b391d3dfa8a612b6ce89185cc136fcb117c164a01c45" exitCode=0 Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.523100 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-94f64597b-rtxdm" event={"ID":"00ee4598-7f76-410b-8737-7086fd0b5aad","Type":"ContainerDied","Data":"59d50015c9164b1e43e2b391d3dfa8a612b6ce89185cc136fcb117c164a01c45"} Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.586709 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-94f64597b-rtxdm" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.591968 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-ceba-account-create-update-cjxpn" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.613916 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-q8snd"] Feb 28 04:54:33 crc kubenswrapper[5014]: W0228 04:54:33.651331 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod00673aaf_5abc_4e06_91dd_8a1d71a5e726.slice/crio-d192b3e4f35a8b41c650d69c023e6ea4a5bb339c8e83a8518762b4431a77a46b WatchSource:0}: Error finding container d192b3e4f35a8b41c650d69c023e6ea4a5bb339c8e83a8518762b4431a77a46b: Status 404 returned error can't find the container with id d192b3e4f35a8b41c650d69c023e6ea4a5bb339c8e83a8518762b4431a77a46b Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.673099 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bj6sb\" (UniqueName: \"kubernetes.io/projected/00ee4598-7f76-410b-8737-7086fd0b5aad-kube-api-access-bj6sb\") pod \"00ee4598-7f76-410b-8737-7086fd0b5aad\" (UID: \"00ee4598-7f76-410b-8737-7086fd0b5aad\") " Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.673679 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/00ee4598-7f76-410b-8737-7086fd0b5aad-ovndb-tls-certs\") pod \"00ee4598-7f76-410b-8737-7086fd0b5aad\" (UID: \"00ee4598-7f76-410b-8737-7086fd0b5aad\") " Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.674586 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/00ee4598-7f76-410b-8737-7086fd0b5aad-config\") pod \"00ee4598-7f76-410b-8737-7086fd0b5aad\" (UID: \"00ee4598-7f76-410b-8737-7086fd0b5aad\") " Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.674647 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"httpd-config\" (UniqueName: \"kubernetes.io/secret/00ee4598-7f76-410b-8737-7086fd0b5aad-httpd-config\") pod \"00ee4598-7f76-410b-8737-7086fd0b5aad\" (UID: \"00ee4598-7f76-410b-8737-7086fd0b5aad\") " Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.674992 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00ee4598-7f76-410b-8737-7086fd0b5aad-combined-ca-bundle\") pod \"00ee4598-7f76-410b-8737-7086fd0b5aad\" (UID: \"00ee4598-7f76-410b-8737-7086fd0b5aad\") " Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.679016 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00ee4598-7f76-410b-8737-7086fd0b5aad-kube-api-access-bj6sb" (OuterVolumeSpecName: "kube-api-access-bj6sb") pod "00ee4598-7f76-410b-8737-7086fd0b5aad" (UID: "00ee4598-7f76-410b-8737-7086fd0b5aad"). InnerVolumeSpecName "kube-api-access-bj6sb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.692174 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00ee4598-7f76-410b-8737-7086fd0b5aad-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "00ee4598-7f76-410b-8737-7086fd0b5aad" (UID: "00ee4598-7f76-410b-8737-7086fd0b5aad"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.780237 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bj6sb\" (UniqueName: \"kubernetes.io/projected/00ee4598-7f76-410b-8737-7086fd0b5aad-kube-api-access-bj6sb\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.780265 5014 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/00ee4598-7f76-410b-8737-7086fd0b5aad-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.844501 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00ee4598-7f76-410b-8737-7086fd0b5aad-config" (OuterVolumeSpecName: "config") pod "00ee4598-7f76-410b-8737-7086fd0b5aad" (UID: "00ee4598-7f76-410b-8737-7086fd0b5aad"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.870897 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00ee4598-7f76-410b-8737-7086fd0b5aad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00ee4598-7f76-410b-8737-7086fd0b5aad" (UID: "00ee4598-7f76-410b-8737-7086fd0b5aad"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.878936 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00ee4598-7f76-410b-8737-7086fd0b5aad-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "00ee4598-7f76-410b-8737-7086fd0b5aad" (UID: "00ee4598-7f76-410b-8737-7086fd0b5aad"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.883971 5014 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/00ee4598-7f76-410b-8737-7086fd0b5aad-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.884012 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/00ee4598-7f76-410b-8737-7086fd0b5aad-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.884023 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00ee4598-7f76-410b-8737-7086fd0b5aad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.932424 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-dwb75"] Feb 28 04:54:33 crc kubenswrapper[5014]: I0228 04:54:33.944300 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-jg75p"] Feb 28 04:54:34 crc kubenswrapper[5014]: W0228 04:54:34.121282 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5ba47cb_6efc_46ac_97df_b895cac925a3.slice/crio-818dc48eaedf9d587d9dda89e67583d65f887dea75d5b6f4fa6e30b99f6e9c56 WatchSource:0}: Error finding container 818dc48eaedf9d587d9dda89e67583d65f887dea75d5b6f4fa6e30b99f6e9c56: Status 404 returned error can't find the container with id 818dc48eaedf9d587d9dda89e67583d65f887dea75d5b6f4fa6e30b99f6e9c56 Feb 28 04:54:34 crc kubenswrapper[5014]: I0228 04:54:34.125932 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-36df-account-create-update-wx84t"] Feb 28 04:54:34 crc kubenswrapper[5014]: I0228 04:54:34.134732 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/nova-cell0-3ea3-account-create-update-h5bnp"] Feb 28 04:54:34 crc kubenswrapper[5014]: W0228 04:54:34.136935 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fd00cf3_841a_4ecc_b28c_8ba9d6d00894.slice/crio-bf2d98046f9e8df69d60108b581f79a72b8c740861e8c18f87371cbc99957551 WatchSource:0}: Error finding container bf2d98046f9e8df69d60108b581f79a72b8c740861e8c18f87371cbc99957551: Status 404 returned error can't find the container with id bf2d98046f9e8df69d60108b581f79a72b8c740861e8c18f87371cbc99957551 Feb 28 04:54:34 crc kubenswrapper[5014]: I0228 04:54:34.282254 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-ceba-account-create-update-cjxpn"] Feb 28 04:54:34 crc kubenswrapper[5014]: W0228 04:54:34.298162 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1d0cd80_c46f_4f36_904f_ce3128cc997f.slice/crio-8bb607c7e2c3f077b2fc2fc023035d134023fef50acba55c7075946b8d2319c8 WatchSource:0}: Error finding container 8bb607c7e2c3f077b2fc2fc023035d134023fef50acba55c7075946b8d2319c8: Status 404 returned error can't find the container with id 8bb607c7e2c3f077b2fc2fc023035d134023fef50acba55c7075946b8d2319c8 Feb 28 04:54:34 crc kubenswrapper[5014]: I0228 04:54:34.519567 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6cbc78cbb4-6wlp7" podUID="80e6122e-74aa-4ee6-a7a3-4af495cb55b7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Feb 28 04:54:34 crc kubenswrapper[5014]: I0228 04:54:34.547564 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-36df-account-create-update-wx84t" 
event={"ID":"d5ba47cb-6efc-46ac-97df-b895cac925a3","Type":"ContainerStarted","Data":"9aaa24f7f9b70636e0bbcf691ed857f8069615926ecea96387c1c3af532343a1"} Feb 28 04:54:34 crc kubenswrapper[5014]: I0228 04:54:34.547609 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-36df-account-create-update-wx84t" event={"ID":"d5ba47cb-6efc-46ac-97df-b895cac925a3","Type":"ContainerStarted","Data":"818dc48eaedf9d587d9dda89e67583d65f887dea75d5b6f4fa6e30b99f6e9c56"} Feb 28 04:54:34 crc kubenswrapper[5014]: I0228 04:54:34.550784 5014 generic.go:334] "Generic (PLEG): container finished" podID="5d9ce69c-9aeb-4120-9abb-d052b56ff801" containerID="3df83a2367c173a24ad720e7edd8074d1b7150abc920d9d6a5ead167db2f5ba1" exitCode=0 Feb 28 04:54:34 crc kubenswrapper[5014]: I0228 04:54:34.550873 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dwb75" event={"ID":"5d9ce69c-9aeb-4120-9abb-d052b56ff801","Type":"ContainerDied","Data":"3df83a2367c173a24ad720e7edd8074d1b7150abc920d9d6a5ead167db2f5ba1"} Feb 28 04:54:34 crc kubenswrapper[5014]: I0228 04:54:34.550899 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dwb75" event={"ID":"5d9ce69c-9aeb-4120-9abb-d052b56ff801","Type":"ContainerStarted","Data":"58ea9862bfcb06626cfb9d660b683470c914fe6e703ccf050a46433f88d951ce"} Feb 28 04:54:34 crc kubenswrapper[5014]: I0228 04:54:34.552475 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ceba-account-create-update-cjxpn" event={"ID":"c1d0cd80-c46f-4f36-904f-ce3128cc997f","Type":"ContainerStarted","Data":"8bb607c7e2c3f077b2fc2fc023035d134023fef50acba55c7075946b8d2319c8"} Feb 28 04:54:34 crc kubenswrapper[5014]: I0228 04:54:34.557300 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-94f64597b-rtxdm" event={"ID":"00ee4598-7f76-410b-8737-7086fd0b5aad","Type":"ContainerDied","Data":"99559b79f3e0891e65229ee07b0dcd9604ff785dd3e7ad2357948e09b9210b0b"} Feb 28 
04:54:34 crc kubenswrapper[5014]: I0228 04:54:34.557352 5014 scope.go:117] "RemoveContainer" containerID="297640f854d1ad0b2237e0bf2efb25418366a3da9d2d0a0b0ff30285ecba1b3c" Feb 28 04:54:34 crc kubenswrapper[5014]: I0228 04:54:34.557464 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-94f64597b-rtxdm" Feb 28 04:54:34 crc kubenswrapper[5014]: I0228 04:54:34.566198 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6c68684b95-vvvhf" event={"ID":"6d31e889-55bb-4dc4-b470-dcb11b4438a7","Type":"ContainerStarted","Data":"4348b89422f68af824b320f21290ae8de4136bc031e6db54cd9012e7348375e4"} Feb 28 04:54:34 crc kubenswrapper[5014]: I0228 04:54:34.566235 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6c68684b95-vvvhf" event={"ID":"6d31e889-55bb-4dc4-b470-dcb11b4438a7","Type":"ContainerStarted","Data":"644593def7eb3c7f0eebac296f1f963da218eba89fcd79f746d80461ae7e01fc"} Feb 28 04:54:34 crc kubenswrapper[5014]: I0228 04:54:34.566246 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6c68684b95-vvvhf" event={"ID":"6d31e889-55bb-4dc4-b470-dcb11b4438a7","Type":"ContainerStarted","Data":"66fd07754f2439f3e433528db9fb104bf3d36e127513f322d6c0d12ac1bc00d7"} Feb 28 04:54:34 crc kubenswrapper[5014]: I0228 04:54:34.567078 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6c68684b95-vvvhf" Feb 28 04:54:34 crc kubenswrapper[5014]: I0228 04:54:34.567108 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6c68684b95-vvvhf" Feb 28 04:54:34 crc kubenswrapper[5014]: I0228 04:54:34.572524 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-36df-account-create-update-wx84t" podStartSLOduration=2.572502905 podStartE2EDuration="2.572502905s" podCreationTimestamp="2026-02-28 04:54:32 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:54:34.562011483 +0000 UTC m=+1263.232137393" watchObservedRunningTime="2026-02-28 04:54:34.572502905 +0000 UTC m=+1263.242628815" Feb 28 04:54:34 crc kubenswrapper[5014]: I0228 04:54:34.575838 5014 generic.go:334] "Generic (PLEG): container finished" podID="00673aaf-5abc-4e06-91dd-8a1d71a5e726" containerID="e46198fde7de213649a0d2fa670fb8f6b899c1c190ad6f15fa59734b0d4103c0" exitCode=0 Feb 28 04:54:34 crc kubenswrapper[5014]: I0228 04:54:34.575910 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-q8snd" event={"ID":"00673aaf-5abc-4e06-91dd-8a1d71a5e726","Type":"ContainerDied","Data":"e46198fde7de213649a0d2fa670fb8f6b899c1c190ad6f15fa59734b0d4103c0"} Feb 28 04:54:34 crc kubenswrapper[5014]: I0228 04:54:34.575937 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-q8snd" event={"ID":"00673aaf-5abc-4e06-91dd-8a1d71a5e726","Type":"ContainerStarted","Data":"d192b3e4f35a8b41c650d69c023e6ea4a5bb339c8e83a8518762b4431a77a46b"} Feb 28 04:54:34 crc kubenswrapper[5014]: I0228 04:54:34.585567 5014 generic.go:334] "Generic (PLEG): container finished" podID="7bf06a59-bec2-4829-bf19-65ed9856d251" containerID="a76224304bda64d72ab7c220934912e2a34af3b6b9aeda4b555afbea41665c59" exitCode=0 Feb 28 04:54:34 crc kubenswrapper[5014]: I0228 04:54:34.585629 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jg75p" event={"ID":"7bf06a59-bec2-4829-bf19-65ed9856d251","Type":"ContainerDied","Data":"a76224304bda64d72ab7c220934912e2a34af3b6b9aeda4b555afbea41665c59"} Feb 28 04:54:34 crc kubenswrapper[5014]: I0228 04:54:34.585654 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jg75p" 
event={"ID":"7bf06a59-bec2-4829-bf19-65ed9856d251","Type":"ContainerStarted","Data":"764fcf57985e0a7ac9af3c9c394962239b4e361fa14b00466d7ec1c83033e98b"} Feb 28 04:54:34 crc kubenswrapper[5014]: I0228 04:54:34.608582 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3ea3-account-create-update-h5bnp" event={"ID":"0fd00cf3-841a-4ecc-b28c-8ba9d6d00894","Type":"ContainerStarted","Data":"5bd27d3174a96a3ec9c5fdc4c8c0b5229913fe405521d965c5cdf1addbfbcf56"} Feb 28 04:54:34 crc kubenswrapper[5014]: I0228 04:54:34.608641 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3ea3-account-create-update-h5bnp" event={"ID":"0fd00cf3-841a-4ecc-b28c-8ba9d6d00894","Type":"ContainerStarted","Data":"bf2d98046f9e8df69d60108b581f79a72b8c740861e8c18f87371cbc99957551"} Feb 28 04:54:34 crc kubenswrapper[5014]: I0228 04:54:34.653126 5014 scope.go:117] "RemoveContainer" containerID="59d50015c9164b1e43e2b391d3dfa8a612b6ce89185cc136fcb117c164a01c45" Feb 28 04:54:34 crc kubenswrapper[5014]: I0228 04:54:34.656547 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-94f64597b-rtxdm"] Feb 28 04:54:34 crc kubenswrapper[5014]: I0228 04:54:34.668791 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-94f64597b-rtxdm"] Feb 28 04:54:34 crc kubenswrapper[5014]: I0228 04:54:34.674520 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6c68684b95-vvvhf" podStartSLOduration=2.674500802 podStartE2EDuration="2.674500802s" podCreationTimestamp="2026-02-28 04:54:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:54:34.625722093 +0000 UTC m=+1263.295848003" watchObservedRunningTime="2026-02-28 04:54:34.674500802 +0000 UTC m=+1263.344626712" Feb 28 04:54:35 crc kubenswrapper[5014]: I0228 04:54:35.628379 5014 generic.go:334] "Generic (PLEG): container finished" 
podID="0fd00cf3-841a-4ecc-b28c-8ba9d6d00894" containerID="5bd27d3174a96a3ec9c5fdc4c8c0b5229913fe405521d965c5cdf1addbfbcf56" exitCode=0 Feb 28 04:54:35 crc kubenswrapper[5014]: I0228 04:54:35.628484 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3ea3-account-create-update-h5bnp" event={"ID":"0fd00cf3-841a-4ecc-b28c-8ba9d6d00894","Type":"ContainerDied","Data":"5bd27d3174a96a3ec9c5fdc4c8c0b5229913fe405521d965c5cdf1addbfbcf56"} Feb 28 04:54:35 crc kubenswrapper[5014]: I0228 04:54:35.632643 5014 generic.go:334] "Generic (PLEG): container finished" podID="d5ba47cb-6efc-46ac-97df-b895cac925a3" containerID="9aaa24f7f9b70636e0bbcf691ed857f8069615926ecea96387c1c3af532343a1" exitCode=0 Feb 28 04:54:35 crc kubenswrapper[5014]: I0228 04:54:35.632730 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-36df-account-create-update-wx84t" event={"ID":"d5ba47cb-6efc-46ac-97df-b895cac925a3","Type":"ContainerDied","Data":"9aaa24f7f9b70636e0bbcf691ed857f8069615926ecea96387c1c3af532343a1"} Feb 28 04:54:35 crc kubenswrapper[5014]: I0228 04:54:35.644377 5014 generic.go:334] "Generic (PLEG): container finished" podID="c1d0cd80-c46f-4f36-904f-ce3128cc997f" containerID="820b2372de6b6cbb263f873004c058b9242e9297ac2dd9d2e297a9a9ebd46155" exitCode=0 Feb 28 04:54:35 crc kubenswrapper[5014]: I0228 04:54:35.644441 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ceba-account-create-update-cjxpn" event={"ID":"c1d0cd80-c46f-4f36-904f-ce3128cc997f","Type":"ContainerDied","Data":"820b2372de6b6cbb263f873004c058b9242e9297ac2dd9d2e297a9a9ebd46155"} Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.107139 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-q8snd" Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.187723 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00ee4598-7f76-410b-8737-7086fd0b5aad" path="/var/lib/kubelet/pods/00ee4598-7f76-410b-8737-7086fd0b5aad/volumes" Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.267654 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dwb75" Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.273789 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p87lx\" (UniqueName: \"kubernetes.io/projected/00673aaf-5abc-4e06-91dd-8a1d71a5e726-kube-api-access-p87lx\") pod \"00673aaf-5abc-4e06-91dd-8a1d71a5e726\" (UID: \"00673aaf-5abc-4e06-91dd-8a1d71a5e726\") " Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.274114 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00673aaf-5abc-4e06-91dd-8a1d71a5e726-operator-scripts\") pod \"00673aaf-5abc-4e06-91dd-8a1d71a5e726\" (UID: \"00673aaf-5abc-4e06-91dd-8a1d71a5e726\") " Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.274538 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00673aaf-5abc-4e06-91dd-8a1d71a5e726-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "00673aaf-5abc-4e06-91dd-8a1d71a5e726" (UID: "00673aaf-5abc-4e06-91dd-8a1d71a5e726"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.275323 5014 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/00673aaf-5abc-4e06-91dd-8a1d71a5e726-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.276387 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jg75p" Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.285407 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00673aaf-5abc-4e06-91dd-8a1d71a5e726-kube-api-access-p87lx" (OuterVolumeSpecName: "kube-api-access-p87lx") pod "00673aaf-5abc-4e06-91dd-8a1d71a5e726" (UID: "00673aaf-5abc-4e06-91dd-8a1d71a5e726"). InnerVolumeSpecName "kube-api-access-p87lx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.291623 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-3ea3-account-create-update-h5bnp" Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.377442 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2cwvq\" (UniqueName: \"kubernetes.io/projected/5d9ce69c-9aeb-4120-9abb-d052b56ff801-kube-api-access-2cwvq\") pod \"5d9ce69c-9aeb-4120-9abb-d052b56ff801\" (UID: \"5d9ce69c-9aeb-4120-9abb-d052b56ff801\") " Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.377490 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v44zc\" (UniqueName: \"kubernetes.io/projected/7bf06a59-bec2-4829-bf19-65ed9856d251-kube-api-access-v44zc\") pod \"7bf06a59-bec2-4829-bf19-65ed9856d251\" (UID: \"7bf06a59-bec2-4829-bf19-65ed9856d251\") " Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.377545 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d9ce69c-9aeb-4120-9abb-d052b56ff801-operator-scripts\") pod \"5d9ce69c-9aeb-4120-9abb-d052b56ff801\" (UID: \"5d9ce69c-9aeb-4120-9abb-d052b56ff801\") " Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.377907 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d9ce69c-9aeb-4120-9abb-d052b56ff801-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5d9ce69c-9aeb-4120-9abb-d052b56ff801" (UID: "5d9ce69c-9aeb-4120-9abb-d052b56ff801"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.378228 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p87lx\" (UniqueName: \"kubernetes.io/projected/00673aaf-5abc-4e06-91dd-8a1d71a5e726-kube-api-access-p87lx\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.378246 5014 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d9ce69c-9aeb-4120-9abb-d052b56ff801-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.382101 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bf06a59-bec2-4829-bf19-65ed9856d251-kube-api-access-v44zc" (OuterVolumeSpecName: "kube-api-access-v44zc") pod "7bf06a59-bec2-4829-bf19-65ed9856d251" (UID: "7bf06a59-bec2-4829-bf19-65ed9856d251"). InnerVolumeSpecName "kube-api-access-v44zc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.382905 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d9ce69c-9aeb-4120-9abb-d052b56ff801-kube-api-access-2cwvq" (OuterVolumeSpecName: "kube-api-access-2cwvq") pod "5d9ce69c-9aeb-4120-9abb-d052b56ff801" (UID: "5d9ce69c-9aeb-4120-9abb-d052b56ff801"). InnerVolumeSpecName "kube-api-access-2cwvq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.479187 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js6t7\" (UniqueName: \"kubernetes.io/projected/0fd00cf3-841a-4ecc-b28c-8ba9d6d00894-kube-api-access-js6t7\") pod \"0fd00cf3-841a-4ecc-b28c-8ba9d6d00894\" (UID: \"0fd00cf3-841a-4ecc-b28c-8ba9d6d00894\") " Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.479228 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bf06a59-bec2-4829-bf19-65ed9856d251-operator-scripts\") pod \"7bf06a59-bec2-4829-bf19-65ed9856d251\" (UID: \"7bf06a59-bec2-4829-bf19-65ed9856d251\") " Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.479420 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fd00cf3-841a-4ecc-b28c-8ba9d6d00894-operator-scripts\") pod \"0fd00cf3-841a-4ecc-b28c-8ba9d6d00894\" (UID: \"0fd00cf3-841a-4ecc-b28c-8ba9d6d00894\") " Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.479729 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bf06a59-bec2-4829-bf19-65ed9856d251-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7bf06a59-bec2-4829-bf19-65ed9856d251" (UID: "7bf06a59-bec2-4829-bf19-65ed9856d251"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.479878 5014 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7bf06a59-bec2-4829-bf19-65ed9856d251-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.479896 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2cwvq\" (UniqueName: \"kubernetes.io/projected/5d9ce69c-9aeb-4120-9abb-d052b56ff801-kube-api-access-2cwvq\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.479908 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v44zc\" (UniqueName: \"kubernetes.io/projected/7bf06a59-bec2-4829-bf19-65ed9856d251-kube-api-access-v44zc\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.480129 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fd00cf3-841a-4ecc-b28c-8ba9d6d00894-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0fd00cf3-841a-4ecc-b28c-8ba9d6d00894" (UID: "0fd00cf3-841a-4ecc-b28c-8ba9d6d00894"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.486078 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fd00cf3-841a-4ecc-b28c-8ba9d6d00894-kube-api-access-js6t7" (OuterVolumeSpecName: "kube-api-access-js6t7") pod "0fd00cf3-841a-4ecc-b28c-8ba9d6d00894" (UID: "0fd00cf3-841a-4ecc-b28c-8ba9d6d00894"). InnerVolumeSpecName "kube-api-access-js6t7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.581965 5014 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fd00cf3-841a-4ecc-b28c-8ba9d6d00894-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.582007 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js6t7\" (UniqueName: \"kubernetes.io/projected/0fd00cf3-841a-4ecc-b28c-8ba9d6d00894-kube-api-access-js6t7\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.658673 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-q8snd" event={"ID":"00673aaf-5abc-4e06-91dd-8a1d71a5e726","Type":"ContainerDied","Data":"d192b3e4f35a8b41c650d69c023e6ea4a5bb339c8e83a8518762b4431a77a46b"} Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.659879 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d192b3e4f35a8b41c650d69c023e6ea4a5bb339c8e83a8518762b4431a77a46b" Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.658709 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-q8snd" Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.660265 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jg75p" event={"ID":"7bf06a59-bec2-4829-bf19-65ed9856d251","Type":"ContainerDied","Data":"764fcf57985e0a7ac9af3c9c394962239b4e361fa14b00466d7ec1c83033e98b"} Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.660303 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="764fcf57985e0a7ac9af3c9c394962239b4e361fa14b00466d7ec1c83033e98b" Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.660361 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-jg75p" Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.663469 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-3ea3-account-create-update-h5bnp" event={"ID":"0fd00cf3-841a-4ecc-b28c-8ba9d6d00894","Type":"ContainerDied","Data":"bf2d98046f9e8df69d60108b581f79a72b8c740861e8c18f87371cbc99957551"} Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.663496 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-3ea3-account-create-update-h5bnp" Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.663504 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf2d98046f9e8df69d60108b581f79a72b8c740861e8c18f87371cbc99957551" Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.665908 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-dwb75" Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.667252 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-dwb75" event={"ID":"5d9ce69c-9aeb-4120-9abb-d052b56ff801","Type":"ContainerDied","Data":"58ea9862bfcb06626cfb9d660b683470c914fe6e703ccf050a46433f88d951ce"} Feb 28 04:54:36 crc kubenswrapper[5014]: I0228 04:54:36.667290 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58ea9862bfcb06626cfb9d660b683470c914fe6e703ccf050a46433f88d951ce" Feb 28 04:54:37 crc kubenswrapper[5014]: I0228 04:54:37.254790 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ceba-account-create-update-cjxpn" Feb 28 04:54:37 crc kubenswrapper[5014]: I0228 04:54:37.263161 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-36df-account-create-update-wx84t" Feb 28 04:54:37 crc kubenswrapper[5014]: I0228 04:54:37.406285 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Feb 28 04:54:37 crc kubenswrapper[5014]: I0228 04:54:37.406649 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kzx4\" (UniqueName: \"kubernetes.io/projected/c1d0cd80-c46f-4f36-904f-ce3128cc997f-kube-api-access-4kzx4\") pod \"c1d0cd80-c46f-4f36-904f-ce3128cc997f\" (UID: \"c1d0cd80-c46f-4f36-904f-ce3128cc997f\") " Feb 28 04:54:37 crc kubenswrapper[5014]: I0228 04:54:37.408913 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5ba47cb-6efc-46ac-97df-b895cac925a3-operator-scripts\") pod \"d5ba47cb-6efc-46ac-97df-b895cac925a3\" (UID: \"d5ba47cb-6efc-46ac-97df-b895cac925a3\") " Feb 28 04:54:37 crc kubenswrapper[5014]: I0228 04:54:37.409447 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1d0cd80-c46f-4f36-904f-ce3128cc997f-operator-scripts\") pod \"c1d0cd80-c46f-4f36-904f-ce3128cc997f\" (UID: \"c1d0cd80-c46f-4f36-904f-ce3128cc997f\") " Feb 28 04:54:37 crc kubenswrapper[5014]: I0228 04:54:37.409820 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bqrsc\" (UniqueName: \"kubernetes.io/projected/d5ba47cb-6efc-46ac-97df-b895cac925a3-kube-api-access-bqrsc\") pod \"d5ba47cb-6efc-46ac-97df-b895cac925a3\" (UID: \"d5ba47cb-6efc-46ac-97df-b895cac925a3\") " Feb 28 04:54:37 crc kubenswrapper[5014]: I0228 04:54:37.409361 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5ba47cb-6efc-46ac-97df-b895cac925a3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"d5ba47cb-6efc-46ac-97df-b895cac925a3" (UID: "d5ba47cb-6efc-46ac-97df-b895cac925a3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:54:37 crc kubenswrapper[5014]: I0228 04:54:37.410785 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1d0cd80-c46f-4f36-904f-ce3128cc997f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c1d0cd80-c46f-4f36-904f-ce3128cc997f" (UID: "c1d0cd80-c46f-4f36-904f-ce3128cc997f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:54:37 crc kubenswrapper[5014]: I0228 04:54:37.412842 5014 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5ba47cb-6efc-46ac-97df-b895cac925a3-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:37 crc kubenswrapper[5014]: I0228 04:54:37.413467 5014 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c1d0cd80-c46f-4f36-904f-ce3128cc997f-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:37 crc kubenswrapper[5014]: I0228 04:54:37.413017 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1d0cd80-c46f-4f36-904f-ce3128cc997f-kube-api-access-4kzx4" (OuterVolumeSpecName: "kube-api-access-4kzx4") pod "c1d0cd80-c46f-4f36-904f-ce3128cc997f" (UID: "c1d0cd80-c46f-4f36-904f-ce3128cc997f"). InnerVolumeSpecName "kube-api-access-4kzx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:54:37 crc kubenswrapper[5014]: I0228 04:54:37.414541 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5ba47cb-6efc-46ac-97df-b895cac925a3-kube-api-access-bqrsc" (OuterVolumeSpecName: "kube-api-access-bqrsc") pod "d5ba47cb-6efc-46ac-97df-b895cac925a3" (UID: "d5ba47cb-6efc-46ac-97df-b895cac925a3"). 
InnerVolumeSpecName "kube-api-access-bqrsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:54:37 crc kubenswrapper[5014]: I0228 04:54:37.516597 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bqrsc\" (UniqueName: \"kubernetes.io/projected/d5ba47cb-6efc-46ac-97df-b895cac925a3-kube-api-access-bqrsc\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:37 crc kubenswrapper[5014]: I0228 04:54:37.516884 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kzx4\" (UniqueName: \"kubernetes.io/projected/c1d0cd80-c46f-4f36-904f-ce3128cc997f-kube-api-access-4kzx4\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:37 crc kubenswrapper[5014]: I0228 04:54:37.675140 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-36df-account-create-update-wx84t" event={"ID":"d5ba47cb-6efc-46ac-97df-b895cac925a3","Type":"ContainerDied","Data":"818dc48eaedf9d587d9dda89e67583d65f887dea75d5b6f4fa6e30b99f6e9c56"} Feb 28 04:54:37 crc kubenswrapper[5014]: I0228 04:54:37.675181 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="818dc48eaedf9d587d9dda89e67583d65f887dea75d5b6f4fa6e30b99f6e9c56" Feb 28 04:54:37 crc kubenswrapper[5014]: I0228 04:54:37.675766 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-36df-account-create-update-wx84t" Feb 28 04:54:37 crc kubenswrapper[5014]: I0228 04:54:37.677339 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-ceba-account-create-update-cjxpn" event={"ID":"c1d0cd80-c46f-4f36-904f-ce3128cc997f","Type":"ContainerDied","Data":"8bb607c7e2c3f077b2fc2fc023035d134023fef50acba55c7075946b8d2319c8"} Feb 28 04:54:37 crc kubenswrapper[5014]: I0228 04:54:37.677380 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bb607c7e2c3f077b2fc2fc023035d134023fef50acba55c7075946b8d2319c8" Feb 28 04:54:37 crc kubenswrapper[5014]: I0228 04:54:37.677394 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-ceba-account-create-update-cjxpn" Feb 28 04:54:38 crc kubenswrapper[5014]: I0228 04:54:38.159821 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-txmbr"] Feb 28 04:54:38 crc kubenswrapper[5014]: E0228 04:54:38.160626 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5ba47cb-6efc-46ac-97df-b895cac925a3" containerName="mariadb-account-create-update" Feb 28 04:54:38 crc kubenswrapper[5014]: I0228 04:54:38.160646 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5ba47cb-6efc-46ac-97df-b895cac925a3" containerName="mariadb-account-create-update" Feb 28 04:54:38 crc kubenswrapper[5014]: E0228 04:54:38.160662 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d9ce69c-9aeb-4120-9abb-d052b56ff801" containerName="mariadb-database-create" Feb 28 04:54:38 crc kubenswrapper[5014]: I0228 04:54:38.160671 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d9ce69c-9aeb-4120-9abb-d052b56ff801" containerName="mariadb-database-create" Feb 28 04:54:38 crc kubenswrapper[5014]: E0228 04:54:38.160688 5014 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0fd00cf3-841a-4ecc-b28c-8ba9d6d00894" containerName="mariadb-account-create-update" Feb 28 04:54:38 crc kubenswrapper[5014]: I0228 04:54:38.160698 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fd00cf3-841a-4ecc-b28c-8ba9d6d00894" containerName="mariadb-account-create-update" Feb 28 04:54:38 crc kubenswrapper[5014]: E0228 04:54:38.160710 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00ee4598-7f76-410b-8737-7086fd0b5aad" containerName="neutron-httpd" Feb 28 04:54:38 crc kubenswrapper[5014]: I0228 04:54:38.160719 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="00ee4598-7f76-410b-8737-7086fd0b5aad" containerName="neutron-httpd" Feb 28 04:54:38 crc kubenswrapper[5014]: E0228 04:54:38.160732 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bf06a59-bec2-4829-bf19-65ed9856d251" containerName="mariadb-database-create" Feb 28 04:54:38 crc kubenswrapper[5014]: I0228 04:54:38.160740 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bf06a59-bec2-4829-bf19-65ed9856d251" containerName="mariadb-database-create" Feb 28 04:54:38 crc kubenswrapper[5014]: E0228 04:54:38.160750 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1d0cd80-c46f-4f36-904f-ce3128cc997f" containerName="mariadb-account-create-update" Feb 28 04:54:38 crc kubenswrapper[5014]: I0228 04:54:38.160758 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1d0cd80-c46f-4f36-904f-ce3128cc997f" containerName="mariadb-account-create-update" Feb 28 04:54:38 crc kubenswrapper[5014]: E0228 04:54:38.160769 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00ee4598-7f76-410b-8737-7086fd0b5aad" containerName="neutron-api" Feb 28 04:54:38 crc kubenswrapper[5014]: I0228 04:54:38.160777 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="00ee4598-7f76-410b-8737-7086fd0b5aad" containerName="neutron-api" Feb 28 04:54:38 crc kubenswrapper[5014]: E0228 04:54:38.162397 5014 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00673aaf-5abc-4e06-91dd-8a1d71a5e726" containerName="mariadb-database-create" Feb 28 04:54:38 crc kubenswrapper[5014]: I0228 04:54:38.162436 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="00673aaf-5abc-4e06-91dd-8a1d71a5e726" containerName="mariadb-database-create" Feb 28 04:54:38 crc kubenswrapper[5014]: I0228 04:54:38.162714 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="00ee4598-7f76-410b-8737-7086fd0b5aad" containerName="neutron-httpd" Feb 28 04:54:38 crc kubenswrapper[5014]: I0228 04:54:38.162739 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="00ee4598-7f76-410b-8737-7086fd0b5aad" containerName="neutron-api" Feb 28 04:54:38 crc kubenswrapper[5014]: I0228 04:54:38.162757 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fd00cf3-841a-4ecc-b28c-8ba9d6d00894" containerName="mariadb-account-create-update" Feb 28 04:54:38 crc kubenswrapper[5014]: I0228 04:54:38.162770 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d9ce69c-9aeb-4120-9abb-d052b56ff801" containerName="mariadb-database-create" Feb 28 04:54:38 crc kubenswrapper[5014]: I0228 04:54:38.162787 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5ba47cb-6efc-46ac-97df-b895cac925a3" containerName="mariadb-account-create-update" Feb 28 04:54:38 crc kubenswrapper[5014]: I0228 04:54:38.162797 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1d0cd80-c46f-4f36-904f-ce3128cc997f" containerName="mariadb-account-create-update" Feb 28 04:54:38 crc kubenswrapper[5014]: I0228 04:54:38.162826 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="00673aaf-5abc-4e06-91dd-8a1d71a5e726" containerName="mariadb-database-create" Feb 28 04:54:38 crc kubenswrapper[5014]: I0228 04:54:38.162841 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bf06a59-bec2-4829-bf19-65ed9856d251" 
containerName="mariadb-database-create" Feb 28 04:54:38 crc kubenswrapper[5014]: I0228 04:54:38.163534 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-txmbr" Feb 28 04:54:38 crc kubenswrapper[5014]: I0228 04:54:38.168477 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-59rrm" Feb 28 04:54:38 crc kubenswrapper[5014]: I0228 04:54:38.169748 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 28 04:54:38 crc kubenswrapper[5014]: I0228 04:54:38.175558 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 28 04:54:38 crc kubenswrapper[5014]: I0228 04:54:38.232334 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-txmbr"] Feb 28 04:54:38 crc kubenswrapper[5014]: I0228 04:54:38.325888 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69e68ab2-dae2-4ebe-9820-c945a9897363-config-data\") pod \"nova-cell0-conductor-db-sync-txmbr\" (UID: \"69e68ab2-dae2-4ebe-9820-c945a9897363\") " pod="openstack/nova-cell0-conductor-db-sync-txmbr" Feb 28 04:54:38 crc kubenswrapper[5014]: I0228 04:54:38.327401 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69e68ab2-dae2-4ebe-9820-c945a9897363-scripts\") pod \"nova-cell0-conductor-db-sync-txmbr\" (UID: \"69e68ab2-dae2-4ebe-9820-c945a9897363\") " pod="openstack/nova-cell0-conductor-db-sync-txmbr" Feb 28 04:54:38 crc kubenswrapper[5014]: I0228 04:54:38.327589 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lng95\" (UniqueName: 
\"kubernetes.io/projected/69e68ab2-dae2-4ebe-9820-c945a9897363-kube-api-access-lng95\") pod \"nova-cell0-conductor-db-sync-txmbr\" (UID: \"69e68ab2-dae2-4ebe-9820-c945a9897363\") " pod="openstack/nova-cell0-conductor-db-sync-txmbr" Feb 28 04:54:38 crc kubenswrapper[5014]: I0228 04:54:38.327678 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69e68ab2-dae2-4ebe-9820-c945a9897363-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-txmbr\" (UID: \"69e68ab2-dae2-4ebe-9820-c945a9897363\") " pod="openstack/nova-cell0-conductor-db-sync-txmbr" Feb 28 04:54:38 crc kubenswrapper[5014]: I0228 04:54:38.429490 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69e68ab2-dae2-4ebe-9820-c945a9897363-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-txmbr\" (UID: \"69e68ab2-dae2-4ebe-9820-c945a9897363\") " pod="openstack/nova-cell0-conductor-db-sync-txmbr" Feb 28 04:54:38 crc kubenswrapper[5014]: I0228 04:54:38.429560 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69e68ab2-dae2-4ebe-9820-c945a9897363-config-data\") pod \"nova-cell0-conductor-db-sync-txmbr\" (UID: \"69e68ab2-dae2-4ebe-9820-c945a9897363\") " pod="openstack/nova-cell0-conductor-db-sync-txmbr" Feb 28 04:54:38 crc kubenswrapper[5014]: I0228 04:54:38.429656 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69e68ab2-dae2-4ebe-9820-c945a9897363-scripts\") pod \"nova-cell0-conductor-db-sync-txmbr\" (UID: \"69e68ab2-dae2-4ebe-9820-c945a9897363\") " pod="openstack/nova-cell0-conductor-db-sync-txmbr" Feb 28 04:54:38 crc kubenswrapper[5014]: I0228 04:54:38.429694 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-lng95\" (UniqueName: \"kubernetes.io/projected/69e68ab2-dae2-4ebe-9820-c945a9897363-kube-api-access-lng95\") pod \"nova-cell0-conductor-db-sync-txmbr\" (UID: \"69e68ab2-dae2-4ebe-9820-c945a9897363\") " pod="openstack/nova-cell0-conductor-db-sync-txmbr" Feb 28 04:54:38 crc kubenswrapper[5014]: I0228 04:54:38.435720 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69e68ab2-dae2-4ebe-9820-c945a9897363-config-data\") pod \"nova-cell0-conductor-db-sync-txmbr\" (UID: \"69e68ab2-dae2-4ebe-9820-c945a9897363\") " pod="openstack/nova-cell0-conductor-db-sync-txmbr" Feb 28 04:54:38 crc kubenswrapper[5014]: I0228 04:54:38.436536 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69e68ab2-dae2-4ebe-9820-c945a9897363-scripts\") pod \"nova-cell0-conductor-db-sync-txmbr\" (UID: \"69e68ab2-dae2-4ebe-9820-c945a9897363\") " pod="openstack/nova-cell0-conductor-db-sync-txmbr" Feb 28 04:54:38 crc kubenswrapper[5014]: I0228 04:54:38.453204 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lng95\" (UniqueName: \"kubernetes.io/projected/69e68ab2-dae2-4ebe-9820-c945a9897363-kube-api-access-lng95\") pod \"nova-cell0-conductor-db-sync-txmbr\" (UID: \"69e68ab2-dae2-4ebe-9820-c945a9897363\") " pod="openstack/nova-cell0-conductor-db-sync-txmbr" Feb 28 04:54:38 crc kubenswrapper[5014]: I0228 04:54:38.469482 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69e68ab2-dae2-4ebe-9820-c945a9897363-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-txmbr\" (UID: \"69e68ab2-dae2-4ebe-9820-c945a9897363\") " pod="openstack/nova-cell0-conductor-db-sync-txmbr" Feb 28 04:54:38 crc kubenswrapper[5014]: I0228 04:54:38.532144 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-txmbr" Feb 28 04:54:40 crc kubenswrapper[5014]: I0228 04:54:40.706154 5014 generic.go:334] "Generic (PLEG): container finished" podID="e2d99d0c-9a87-4d80-8105-5c86158f6770" containerID="26dd67cef0700c1f97a6ae12163f2280e25f790cb69a3e7e22558e232cd6bd01" exitCode=137 Feb 28 04:54:40 crc kubenswrapper[5014]: I0228 04:54:40.706206 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2d99d0c-9a87-4d80-8105-5c86158f6770","Type":"ContainerDied","Data":"26dd67cef0700c1f97a6ae12163f2280e25f790cb69a3e7e22558e232cd6bd01"} Feb 28 04:54:41 crc kubenswrapper[5014]: I0228 04:54:41.621551 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="e2d99d0c-9a87-4d80-8105-5c86158f6770" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.139:3000/\": dial tcp 10.217.0.139:3000: connect: connection refused" Feb 28 04:54:42 crc kubenswrapper[5014]: I0228 04:54:42.763251 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6c68684b95-vvvhf" Feb 28 04:54:42 crc kubenswrapper[5014]: I0228 04:54:42.773284 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6c68684b95-vvvhf" Feb 28 04:54:43 crc kubenswrapper[5014]: I0228 04:54:43.656041 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 04:54:43 crc kubenswrapper[5014]: I0228 04:54:43.737416 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"dae41ad3-a997-4a4a-91ab-34175d98fb97","Type":"ContainerStarted","Data":"ac1812ec8dbbe2de1bc0c9efa87c1a52a6fd03f9d3972838bb0d098fa9e4d2dd"} Feb 28 04:54:43 crc kubenswrapper[5014]: I0228 04:54:43.741106 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e2d99d0c-9a87-4d80-8105-5c86158f6770","Type":"ContainerDied","Data":"5c29a9dcdf73310ca958d248b363a67133714e707d1c1dd02702d60166510deb"} Feb 28 04:54:43 crc kubenswrapper[5014]: I0228 04:54:43.741193 5014 scope.go:117] "RemoveContainer" containerID="26dd67cef0700c1f97a6ae12163f2280e25f790cb69a3e7e22558e232cd6bd01" Feb 28 04:54:43 crc kubenswrapper[5014]: I0228 04:54:43.741349 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 04:54:43 crc kubenswrapper[5014]: I0228 04:54:43.771666 5014 scope.go:117] "RemoveContainer" containerID="5c6b8b923b2e0f76fe1b4cbbe0d395ecdb86b62f7a043c552751f388c76968e3" Feb 28 04:54:43 crc kubenswrapper[5014]: I0228 04:54:43.791309 5014 scope.go:117] "RemoveContainer" containerID="df0a00a59040905d57860047e9263d3015a68c94ca415d4eb5741d25b71aefc0" Feb 28 04:54:43 crc kubenswrapper[5014]: I0228 04:54:43.822108 5014 scope.go:117] "RemoveContainer" containerID="5285b11c7f63ea45c0b337d406c4345d1d6dd50a696f0fc66b1291c91ecf9739" Feb 28 04:54:43 crc kubenswrapper[5014]: I0228 04:54:43.846831 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2d99d0c-9a87-4d80-8105-5c86158f6770-sg-core-conf-yaml\") pod \"e2d99d0c-9a87-4d80-8105-5c86158f6770\" (UID: \"e2d99d0c-9a87-4d80-8105-5c86158f6770\") " Feb 28 04:54:43 crc kubenswrapper[5014]: I0228 04:54:43.846918 5014 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2d99d0c-9a87-4d80-8105-5c86158f6770-run-httpd\") pod \"e2d99d0c-9a87-4d80-8105-5c86158f6770\" (UID: \"e2d99d0c-9a87-4d80-8105-5c86158f6770\") " Feb 28 04:54:43 crc kubenswrapper[5014]: I0228 04:54:43.846940 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhzqt\" (UniqueName: \"kubernetes.io/projected/e2d99d0c-9a87-4d80-8105-5c86158f6770-kube-api-access-vhzqt\") pod \"e2d99d0c-9a87-4d80-8105-5c86158f6770\" (UID: \"e2d99d0c-9a87-4d80-8105-5c86158f6770\") " Feb 28 04:54:43 crc kubenswrapper[5014]: I0228 04:54:43.846973 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2d99d0c-9a87-4d80-8105-5c86158f6770-log-httpd\") pod \"e2d99d0c-9a87-4d80-8105-5c86158f6770\" (UID: \"e2d99d0c-9a87-4d80-8105-5c86158f6770\") " Feb 28 04:54:43 crc kubenswrapper[5014]: I0228 04:54:43.847063 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.7234668389999999 podStartE2EDuration="11.8470444s" podCreationTimestamp="2026-02-28 04:54:32 +0000 UTC" firstStartedPulling="2026-02-28 04:54:33.277611265 +0000 UTC m=+1261.947737175" lastFinishedPulling="2026-02-28 04:54:43.401188826 +0000 UTC m=+1272.071314736" observedRunningTime="2026-02-28 04:54:43.767092725 +0000 UTC m=+1272.437218625" watchObservedRunningTime="2026-02-28 04:54:43.8470444 +0000 UTC m=+1272.517170310" Feb 28 04:54:43 crc kubenswrapper[5014]: I0228 04:54:43.847201 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2d99d0c-9a87-4d80-8105-5c86158f6770-config-data\") pod \"e2d99d0c-9a87-4d80-8105-5c86158f6770\" (UID: \"e2d99d0c-9a87-4d80-8105-5c86158f6770\") " Feb 28 04:54:43 crc kubenswrapper[5014]: I0228 04:54:43.847239 5014 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d99d0c-9a87-4d80-8105-5c86158f6770-combined-ca-bundle\") pod \"e2d99d0c-9a87-4d80-8105-5c86158f6770\" (UID: \"e2d99d0c-9a87-4d80-8105-5c86158f6770\") " Feb 28 04:54:43 crc kubenswrapper[5014]: I0228 04:54:43.847257 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2d99d0c-9a87-4d80-8105-5c86158f6770-scripts\") pod \"e2d99d0c-9a87-4d80-8105-5c86158f6770\" (UID: \"e2d99d0c-9a87-4d80-8105-5c86158f6770\") " Feb 28 04:54:43 crc kubenswrapper[5014]: I0228 04:54:43.849385 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2d99d0c-9a87-4d80-8105-5c86158f6770-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e2d99d0c-9a87-4d80-8105-5c86158f6770" (UID: "e2d99d0c-9a87-4d80-8105-5c86158f6770"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:54:43 crc kubenswrapper[5014]: I0228 04:54:43.850436 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2d99d0c-9a87-4d80-8105-5c86158f6770-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e2d99d0c-9a87-4d80-8105-5c86158f6770" (UID: "e2d99d0c-9a87-4d80-8105-5c86158f6770"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:54:43 crc kubenswrapper[5014]: I0228 04:54:43.851723 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-txmbr"] Feb 28 04:54:43 crc kubenswrapper[5014]: W0228 04:54:43.852046 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69e68ab2_dae2_4ebe_9820_c945a9897363.slice/crio-765cd64a1d3547711b4256d3080d253a4a12bd81e0c49349a66d997dfee7d8be WatchSource:0}: Error finding container 765cd64a1d3547711b4256d3080d253a4a12bd81e0c49349a66d997dfee7d8be: Status 404 returned error can't find the container with id 765cd64a1d3547711b4256d3080d253a4a12bd81e0c49349a66d997dfee7d8be Feb 28 04:54:43 crc kubenswrapper[5014]: I0228 04:54:43.854925 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2d99d0c-9a87-4d80-8105-5c86158f6770-scripts" (OuterVolumeSpecName: "scripts") pod "e2d99d0c-9a87-4d80-8105-5c86158f6770" (UID: "e2d99d0c-9a87-4d80-8105-5c86158f6770"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:54:43 crc kubenswrapper[5014]: I0228 04:54:43.856196 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2d99d0c-9a87-4d80-8105-5c86158f6770-kube-api-access-vhzqt" (OuterVolumeSpecName: "kube-api-access-vhzqt") pod "e2d99d0c-9a87-4d80-8105-5c86158f6770" (UID: "e2d99d0c-9a87-4d80-8105-5c86158f6770"). InnerVolumeSpecName "kube-api-access-vhzqt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:54:43 crc kubenswrapper[5014]: I0228 04:54:43.879241 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2d99d0c-9a87-4d80-8105-5c86158f6770-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e2d99d0c-9a87-4d80-8105-5c86158f6770" (UID: "e2d99d0c-9a87-4d80-8105-5c86158f6770"). 
InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:54:43 crc kubenswrapper[5014]: I0228 04:54:43.937059 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2d99d0c-9a87-4d80-8105-5c86158f6770-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2d99d0c-9a87-4d80-8105-5c86158f6770" (UID: "e2d99d0c-9a87-4d80-8105-5c86158f6770"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:54:43 crc kubenswrapper[5014]: I0228 04:54:43.950054 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2d99d0c-9a87-4d80-8105-5c86158f6770-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:43 crc kubenswrapper[5014]: I0228 04:54:43.950080 5014 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2d99d0c-9a87-4d80-8105-5c86158f6770-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:43 crc kubenswrapper[5014]: I0228 04:54:43.950090 5014 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e2d99d0c-9a87-4d80-8105-5c86158f6770-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:43 crc kubenswrapper[5014]: I0228 04:54:43.950100 5014 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2d99d0c-9a87-4d80-8105-5c86158f6770-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:43 crc kubenswrapper[5014]: I0228 04:54:43.950109 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhzqt\" (UniqueName: \"kubernetes.io/projected/e2d99d0c-9a87-4d80-8105-5c86158f6770-kube-api-access-vhzqt\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:43 crc kubenswrapper[5014]: I0228 04:54:43.950118 5014 reconciler_common.go:293] "Volume detached for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e2d99d0c-9a87-4d80-8105-5c86158f6770-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:43 crc kubenswrapper[5014]: I0228 04:54:43.961360 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2d99d0c-9a87-4d80-8105-5c86158f6770-config-data" (OuterVolumeSpecName: "config-data") pod "e2d99d0c-9a87-4d80-8105-5c86158f6770" (UID: "e2d99d0c-9a87-4d80-8105-5c86158f6770"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.065020 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2d99d0c-9a87-4d80-8105-5c86158f6770-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.155862 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.171705 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.197385 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2d99d0c-9a87-4d80-8105-5c86158f6770" path="/var/lib/kubelet/pods/e2d99d0c-9a87-4d80-8105-5c86158f6770/volumes" Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.207174 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:54:44 crc kubenswrapper[5014]: E0228 04:54:44.207482 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d99d0c-9a87-4d80-8105-5c86158f6770" containerName="sg-core" Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.207494 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d99d0c-9a87-4d80-8105-5c86158f6770" containerName="sg-core" Feb 28 04:54:44 crc kubenswrapper[5014]: E0228 04:54:44.207507 5014 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="e2d99d0c-9a87-4d80-8105-5c86158f6770" containerName="ceilometer-notification-agent" Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.207513 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d99d0c-9a87-4d80-8105-5c86158f6770" containerName="ceilometer-notification-agent" Feb 28 04:54:44 crc kubenswrapper[5014]: E0228 04:54:44.207525 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d99d0c-9a87-4d80-8105-5c86158f6770" containerName="ceilometer-central-agent" Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.207534 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d99d0c-9a87-4d80-8105-5c86158f6770" containerName="ceilometer-central-agent" Feb 28 04:54:44 crc kubenswrapper[5014]: E0228 04:54:44.207542 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d99d0c-9a87-4d80-8105-5c86158f6770" containerName="proxy-httpd" Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.207548 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d99d0c-9a87-4d80-8105-5c86158f6770" containerName="proxy-httpd" Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.207715 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2d99d0c-9a87-4d80-8105-5c86158f6770" containerName="ceilometer-central-agent" Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.207726 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2d99d0c-9a87-4d80-8105-5c86158f6770" containerName="proxy-httpd" Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.207738 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2d99d0c-9a87-4d80-8105-5c86158f6770" containerName="ceilometer-notification-agent" Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.207746 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2d99d0c-9a87-4d80-8105-5c86158f6770" containerName="sg-core" Feb 28 04:54:44 crc 
kubenswrapper[5014]: I0228 04:54:44.209240 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.209326 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.211752 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.211987 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.369682 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-config-data\") pod \"ceilometer-0\" (UID: \"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c\") " pod="openstack/ceilometer-0" Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.369737 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-log-httpd\") pod \"ceilometer-0\" (UID: \"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c\") " pod="openstack/ceilometer-0" Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.369951 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-run-httpd\") pod \"ceilometer-0\" (UID: \"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c\") " pod="openstack/ceilometer-0" Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.370126 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-scripts\") pod 
\"ceilometer-0\" (UID: \"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c\") " pod="openstack/ceilometer-0" Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.370257 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c\") " pod="openstack/ceilometer-0" Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.370289 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c\") " pod="openstack/ceilometer-0" Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.370349 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d7x9\" (UniqueName: \"kubernetes.io/projected/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-kube-api-access-2d7x9\") pod \"ceilometer-0\" (UID: \"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c\") " pod="openstack/ceilometer-0" Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.472524 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-run-httpd\") pod \"ceilometer-0\" (UID: \"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c\") " pod="openstack/ceilometer-0" Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.472604 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-scripts\") pod \"ceilometer-0\" (UID: \"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c\") " pod="openstack/ceilometer-0" Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.472667 
5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c\") " pod="openstack/ceilometer-0" Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.472686 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c\") " pod="openstack/ceilometer-0" Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.472718 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d7x9\" (UniqueName: \"kubernetes.io/projected/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-kube-api-access-2d7x9\") pod \"ceilometer-0\" (UID: \"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c\") " pod="openstack/ceilometer-0" Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.472771 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-config-data\") pod \"ceilometer-0\" (UID: \"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c\") " pod="openstack/ceilometer-0" Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.472788 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-log-httpd\") pod \"ceilometer-0\" (UID: \"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c\") " pod="openstack/ceilometer-0" Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.473198 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-log-httpd\") pod \"ceilometer-0\" (UID: 
\"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c\") " pod="openstack/ceilometer-0" Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.473613 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-run-httpd\") pod \"ceilometer-0\" (UID: \"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c\") " pod="openstack/ceilometer-0" Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.477947 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c\") " pod="openstack/ceilometer-0" Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.478116 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-scripts\") pod \"ceilometer-0\" (UID: \"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c\") " pod="openstack/ceilometer-0" Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.479163 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-config-data\") pod \"ceilometer-0\" (UID: \"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c\") " pod="openstack/ceilometer-0" Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.485525 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c\") " pod="openstack/ceilometer-0" Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.494529 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d7x9\" (UniqueName: 
\"kubernetes.io/projected/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-kube-api-access-2d7x9\") pod \"ceilometer-0\" (UID: \"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c\") " pod="openstack/ceilometer-0" Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.519227 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-6cbc78cbb4-6wlp7" podUID="80e6122e-74aa-4ee6-a7a3-4af495cb55b7" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.150:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.150:8443: connect: connection refused" Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.530510 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.754961 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-txmbr" event={"ID":"69e68ab2-dae2-4ebe-9820-c945a9897363","Type":"ContainerStarted","Data":"765cd64a1d3547711b4256d3080d253a4a12bd81e0c49349a66d997dfee7d8be"} Feb 28 04:54:44 crc kubenswrapper[5014]: I0228 04:54:44.995007 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:54:45 crc kubenswrapper[5014]: I0228 04:54:45.777353 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c","Type":"ContainerStarted","Data":"870a6292ca4c75aa5f7d0dfa053c34f950551c774a9d218a8d6fa678d7dbede8"} Feb 28 04:54:45 crc kubenswrapper[5014]: I0228 04:54:45.777610 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c","Type":"ContainerStarted","Data":"93f2a4223811e8fd9dcc7c361b080eba544ef913d91f77f37ae111ad0589f3d4"} Feb 28 04:54:46 crc kubenswrapper[5014]: I0228 04:54:46.787627 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c","Type":"ContainerStarted","Data":"ff0b3e22de6cde25994ed66c5115e9017e184ff0e041f656e1b6a03d4577ff33"} Feb 28 04:54:47 crc kubenswrapper[5014]: I0228 04:54:47.144271 5014 scope.go:117] "RemoveContainer" containerID="5f1a677503627726f8500aa93d09bc9493c95a61fd1567c361904b444c215213" Feb 28 04:54:49 crc kubenswrapper[5014]: I0228 04:54:49.779406 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 04:54:49 crc kubenswrapper[5014]: I0228 04:54:49.780242 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f4a0fe1f-df1b-44ad-bab1-71610e650357" containerName="glance-log" containerID="cri-o://7e7337524f7dc89d1658cce858b13c9109af9e8fddadf430a18536cc94ddde08" gracePeriod=30 Feb 28 04:54:49 crc kubenswrapper[5014]: I0228 04:54:49.780383 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f4a0fe1f-df1b-44ad-bab1-71610e650357" containerName="glance-httpd" containerID="cri-o://25d5dbc4b4de187af03422ccdf5a83f5e3a3d14e5bc1e594a17da340080d03de" gracePeriod=30 Feb 28 04:54:49 crc kubenswrapper[5014]: I0228 04:54:49.820730 5014 generic.go:334] "Generic (PLEG): container finished" podID="80e6122e-74aa-4ee6-a7a3-4af495cb55b7" containerID="76ed047bf90263787959b88328e36777c017c0f8dd1ff494685dddd105e6d8cd" exitCode=137 Feb 28 04:54:49 crc kubenswrapper[5014]: I0228 04:54:49.820778 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cbc78cbb4-6wlp7" event={"ID":"80e6122e-74aa-4ee6-a7a3-4af495cb55b7","Type":"ContainerDied","Data":"76ed047bf90263787959b88328e36777c017c0f8dd1ff494685dddd105e6d8cd"} Feb 28 04:54:50 crc kubenswrapper[5014]: I0228 04:54:50.834232 5014 generic.go:334] "Generic (PLEG): container finished" podID="f4a0fe1f-df1b-44ad-bab1-71610e650357" 
containerID="7e7337524f7dc89d1658cce858b13c9109af9e8fddadf430a18536cc94ddde08" exitCode=143 Feb 28 04:54:50 crc kubenswrapper[5014]: I0228 04:54:50.834286 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f4a0fe1f-df1b-44ad-bab1-71610e650357","Type":"ContainerDied","Data":"7e7337524f7dc89d1658cce858b13c9109af9e8fddadf430a18536cc94ddde08"} Feb 28 04:54:51 crc kubenswrapper[5014]: I0228 04:54:51.909648 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 04:54:51 crc kubenswrapper[5014]: I0228 04:54:51.911609 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d6f52a80-76bd-4c20-a619-2926065d7824" containerName="glance-log" containerID="cri-o://3d26f8527acbd6672331b9c2d4d418035dbc87e4072030173ebbbdfee80972e7" gracePeriod=30 Feb 28 04:54:51 crc kubenswrapper[5014]: I0228 04:54:51.911669 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d6f52a80-76bd-4c20-a619-2926065d7824" containerName="glance-httpd" containerID="cri-o://73de03346cc98ab1dfd17fcf7da1f55cc345dd690c4d5ca09632c5e9607a6de4" gracePeriod=30 Feb 28 04:54:52 crc kubenswrapper[5014]: I0228 04:54:52.853374 5014 generic.go:334] "Generic (PLEG): container finished" podID="d6f52a80-76bd-4c20-a619-2926065d7824" containerID="3d26f8527acbd6672331b9c2d4d418035dbc87e4072030173ebbbdfee80972e7" exitCode=143 Feb 28 04:54:52 crc kubenswrapper[5014]: I0228 04:54:52.853664 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d6f52a80-76bd-4c20-a619-2926065d7824","Type":"ContainerDied","Data":"3d26f8527acbd6672331b9c2d4d418035dbc87e4072030173ebbbdfee80972e7"} Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.325459 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6cbc78cbb4-6wlp7" Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.434276 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-horizon-secret-key\") pod \"80e6122e-74aa-4ee6-a7a3-4af495cb55b7\" (UID: \"80e6122e-74aa-4ee6-a7a3-4af495cb55b7\") " Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.434413 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lq49q\" (UniqueName: \"kubernetes.io/projected/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-kube-api-access-lq49q\") pod \"80e6122e-74aa-4ee6-a7a3-4af495cb55b7\" (UID: \"80e6122e-74aa-4ee6-a7a3-4af495cb55b7\") " Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.434435 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-horizon-tls-certs\") pod \"80e6122e-74aa-4ee6-a7a3-4af495cb55b7\" (UID: \"80e6122e-74aa-4ee6-a7a3-4af495cb55b7\") " Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.434981 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-config-data\") pod \"80e6122e-74aa-4ee6-a7a3-4af495cb55b7\" (UID: \"80e6122e-74aa-4ee6-a7a3-4af495cb55b7\") " Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.435019 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-logs\") pod \"80e6122e-74aa-4ee6-a7a3-4af495cb55b7\" (UID: \"80e6122e-74aa-4ee6-a7a3-4af495cb55b7\") " Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.435116 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-combined-ca-bundle\") pod \"80e6122e-74aa-4ee6-a7a3-4af495cb55b7\" (UID: \"80e6122e-74aa-4ee6-a7a3-4af495cb55b7\") " Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.435165 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-scripts\") pod \"80e6122e-74aa-4ee6-a7a3-4af495cb55b7\" (UID: \"80e6122e-74aa-4ee6-a7a3-4af495cb55b7\") " Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.436115 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-logs" (OuterVolumeSpecName: "logs") pod "80e6122e-74aa-4ee6-a7a3-4af495cb55b7" (UID: "80e6122e-74aa-4ee6-a7a3-4af495cb55b7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.517599 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "80e6122e-74aa-4ee6-a7a3-4af495cb55b7" (UID: "80e6122e-74aa-4ee6-a7a3-4af495cb55b7"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.533063 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-kube-api-access-lq49q" (OuterVolumeSpecName: "kube-api-access-lq49q") pod "80e6122e-74aa-4ee6-a7a3-4af495cb55b7" (UID: "80e6122e-74aa-4ee6-a7a3-4af495cb55b7"). InnerVolumeSpecName "kube-api-access-lq49q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.534657 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-config-data" (OuterVolumeSpecName: "config-data") pod "80e6122e-74aa-4ee6-a7a3-4af495cb55b7" (UID: "80e6122e-74aa-4ee6-a7a3-4af495cb55b7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.537240 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "80e6122e-74aa-4ee6-a7a3-4af495cb55b7" (UID: "80e6122e-74aa-4ee6-a7a3-4af495cb55b7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.537640 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-scripts" (OuterVolumeSpecName: "scripts") pod "80e6122e-74aa-4ee6-a7a3-4af495cb55b7" (UID: "80e6122e-74aa-4ee6-a7a3-4af495cb55b7"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.537647 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.537684 5014 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.537693 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lq49q\" (UniqueName: \"kubernetes.io/projected/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-kube-api-access-lq49q\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.537705 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.537715 5014 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-logs\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.579107 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "80e6122e-74aa-4ee6-a7a3-4af495cb55b7" (UID: "80e6122e-74aa-4ee6-a7a3-4af495cb55b7"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.639647 5014 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.639902 5014 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/80e6122e-74aa-4ee6-a7a3-4af495cb55b7-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.835306 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.842467 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4a0fe1f-df1b-44ad-bab1-71610e650357-public-tls-certs\") pod \"f4a0fe1f-df1b-44ad-bab1-71610e650357\" (UID: \"f4a0fe1f-df1b-44ad-bab1-71610e650357\") " Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.842570 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmzs7\" (UniqueName: \"kubernetes.io/projected/f4a0fe1f-df1b-44ad-bab1-71610e650357-kube-api-access-dmzs7\") pod \"f4a0fe1f-df1b-44ad-bab1-71610e650357\" (UID: \"f4a0fe1f-df1b-44ad-bab1-71610e650357\") " Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.842603 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f4a0fe1f-df1b-44ad-bab1-71610e650357-httpd-run\") pod \"f4a0fe1f-df1b-44ad-bab1-71610e650357\" (UID: \"f4a0fe1f-df1b-44ad-bab1-71610e650357\") " Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.843233 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/f4a0fe1f-df1b-44ad-bab1-71610e650357-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f4a0fe1f-df1b-44ad-bab1-71610e650357" (UID: "f4a0fe1f-df1b-44ad-bab1-71610e650357"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.843310 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4a0fe1f-df1b-44ad-bab1-71610e650357-scripts\") pod \"f4a0fe1f-df1b-44ad-bab1-71610e650357\" (UID: \"f4a0fe1f-df1b-44ad-bab1-71610e650357\") " Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.843758 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4a0fe1f-df1b-44ad-bab1-71610e650357-combined-ca-bundle\") pod \"f4a0fe1f-df1b-44ad-bab1-71610e650357\" (UID: \"f4a0fe1f-df1b-44ad-bab1-71610e650357\") " Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.844087 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"f4a0fe1f-df1b-44ad-bab1-71610e650357\" (UID: \"f4a0fe1f-df1b-44ad-bab1-71610e650357\") " Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.844131 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4a0fe1f-df1b-44ad-bab1-71610e650357-config-data\") pod \"f4a0fe1f-df1b-44ad-bab1-71610e650357\" (UID: \"f4a0fe1f-df1b-44ad-bab1-71610e650357\") " Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.844161 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4a0fe1f-df1b-44ad-bab1-71610e650357-logs\") pod \"f4a0fe1f-df1b-44ad-bab1-71610e650357\" (UID: \"f4a0fe1f-df1b-44ad-bab1-71610e650357\") " Feb 28 04:54:53 crc 
kubenswrapper[5014]: I0228 04:54:53.844833 5014 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f4a0fe1f-df1b-44ad-bab1-71610e650357-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.848138 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4a0fe1f-df1b-44ad-bab1-71610e650357-logs" (OuterVolumeSpecName: "logs") pod "f4a0fe1f-df1b-44ad-bab1-71610e650357" (UID: "f4a0fe1f-df1b-44ad-bab1-71610e650357"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.848867 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4a0fe1f-df1b-44ad-bab1-71610e650357-scripts" (OuterVolumeSpecName: "scripts") pod "f4a0fe1f-df1b-44ad-bab1-71610e650357" (UID: "f4a0fe1f-df1b-44ad-bab1-71610e650357"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.850070 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4a0fe1f-df1b-44ad-bab1-71610e650357-kube-api-access-dmzs7" (OuterVolumeSpecName: "kube-api-access-dmzs7") pod "f4a0fe1f-df1b-44ad-bab1-71610e650357" (UID: "f4a0fe1f-df1b-44ad-bab1-71610e650357"). InnerVolumeSpecName "kube-api-access-dmzs7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.856159 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "f4a0fe1f-df1b-44ad-bab1-71610e650357" (UID: "f4a0fe1f-df1b-44ad-bab1-71610e650357"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.894951 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6cbc78cbb4-6wlp7" event={"ID":"80e6122e-74aa-4ee6-a7a3-4af495cb55b7","Type":"ContainerDied","Data":"ccddd8be70135de8e3d35c92256bc91174f8141e4ba7beff56dee84bd7a7ece3"} Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.895011 5014 scope.go:117] "RemoveContainer" containerID="e0ca2cc31bef32f1a8996357e09afc4440944891b9575e4c249702b104fa3fa9" Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.895165 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6cbc78cbb4-6wlp7" Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.904098 5014 generic.go:334] "Generic (PLEG): container finished" podID="f4a0fe1f-df1b-44ad-bab1-71610e650357" containerID="25d5dbc4b4de187af03422ccdf5a83f5e3a3d14e5bc1e594a17da340080d03de" exitCode=0 Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.904154 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f4a0fe1f-df1b-44ad-bab1-71610e650357","Type":"ContainerDied","Data":"25d5dbc4b4de187af03422ccdf5a83f5e3a3d14e5bc1e594a17da340080d03de"} Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.904186 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f4a0fe1f-df1b-44ad-bab1-71610e650357","Type":"ContainerDied","Data":"c535a16d4c28041833f39ce7c2c0d3763a658c124acf92cf428af777db5241c4"} Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.904263 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.953997 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmzs7\" (UniqueName: \"kubernetes.io/projected/f4a0fe1f-df1b-44ad-bab1-71610e650357-kube-api-access-dmzs7\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.954039 5014 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4a0fe1f-df1b-44ad-bab1-71610e650357-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.954074 5014 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.954086 5014 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4a0fe1f-df1b-44ad-bab1-71610e650357-logs\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.968953 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6cbc78cbb4-6wlp7"] Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.984314 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6cbc78cbb4-6wlp7"] Feb 28 04:54:53 crc kubenswrapper[5014]: I0228 04:54:53.991123 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4a0fe1f-df1b-44ad-bab1-71610e650357-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4a0fe1f-df1b-44ad-bab1-71610e650357" (UID: "f4a0fe1f-df1b-44ad-bab1-71610e650357"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.026176 5014 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.057553 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4a0fe1f-df1b-44ad-bab1-71610e650357-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.057603 5014 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.066908 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4a0fe1f-df1b-44ad-bab1-71610e650357-config-data" (OuterVolumeSpecName: "config-data") pod "f4a0fe1f-df1b-44ad-bab1-71610e650357" (UID: "f4a0fe1f-df1b-44ad-bab1-71610e650357"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.071492 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4a0fe1f-df1b-44ad-bab1-71610e650357-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f4a0fe1f-df1b-44ad-bab1-71610e650357" (UID: "f4a0fe1f-df1b-44ad-bab1-71610e650357"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.079027 5014 scope.go:117] "RemoveContainer" containerID="76ed047bf90263787959b88328e36777c017c0f8dd1ff494685dddd105e6d8cd" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.101066 5014 scope.go:117] "RemoveContainer" containerID="25d5dbc4b4de187af03422ccdf5a83f5e3a3d14e5bc1e594a17da340080d03de" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.125504 5014 scope.go:117] "RemoveContainer" containerID="7e7337524f7dc89d1658cce858b13c9109af9e8fddadf430a18536cc94ddde08" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.143785 5014 scope.go:117] "RemoveContainer" containerID="25d5dbc4b4de187af03422ccdf5a83f5e3a3d14e5bc1e594a17da340080d03de" Feb 28 04:54:54 crc kubenswrapper[5014]: E0228 04:54:54.144226 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25d5dbc4b4de187af03422ccdf5a83f5e3a3d14e5bc1e594a17da340080d03de\": container with ID starting with 25d5dbc4b4de187af03422ccdf5a83f5e3a3d14e5bc1e594a17da340080d03de not found: ID does not exist" containerID="25d5dbc4b4de187af03422ccdf5a83f5e3a3d14e5bc1e594a17da340080d03de" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.144266 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25d5dbc4b4de187af03422ccdf5a83f5e3a3d14e5bc1e594a17da340080d03de"} err="failed to get container status \"25d5dbc4b4de187af03422ccdf5a83f5e3a3d14e5bc1e594a17da340080d03de\": rpc error: code = NotFound desc = could not find container \"25d5dbc4b4de187af03422ccdf5a83f5e3a3d14e5bc1e594a17da340080d03de\": container with ID starting with 25d5dbc4b4de187af03422ccdf5a83f5e3a3d14e5bc1e594a17da340080d03de not found: ID does not exist" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.144298 5014 scope.go:117] "RemoveContainer" 
containerID="7e7337524f7dc89d1658cce858b13c9109af9e8fddadf430a18536cc94ddde08" Feb 28 04:54:54 crc kubenswrapper[5014]: E0228 04:54:54.145307 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e7337524f7dc89d1658cce858b13c9109af9e8fddadf430a18536cc94ddde08\": container with ID starting with 7e7337524f7dc89d1658cce858b13c9109af9e8fddadf430a18536cc94ddde08 not found: ID does not exist" containerID="7e7337524f7dc89d1658cce858b13c9109af9e8fddadf430a18536cc94ddde08" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.145333 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e7337524f7dc89d1658cce858b13c9109af9e8fddadf430a18536cc94ddde08"} err="failed to get container status \"7e7337524f7dc89d1658cce858b13c9109af9e8fddadf430a18536cc94ddde08\": rpc error: code = NotFound desc = could not find container \"7e7337524f7dc89d1658cce858b13c9109af9e8fddadf430a18536cc94ddde08\": container with ID starting with 7e7337524f7dc89d1658cce858b13c9109af9e8fddadf430a18536cc94ddde08 not found: ID does not exist" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.158851 5014 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4a0fe1f-df1b-44ad-bab1-71610e650357-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.158881 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4a0fe1f-df1b-44ad-bab1-71610e650357-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.183315 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80e6122e-74aa-4ee6-a7a3-4af495cb55b7" path="/var/lib/kubelet/pods/80e6122e-74aa-4ee6-a7a3-4af495cb55b7/volumes" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.233218 5014 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.242970 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.257247 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 04:54:54 crc kubenswrapper[5014]: E0228 04:54:54.257671 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4a0fe1f-df1b-44ad-bab1-71610e650357" containerName="glance-log" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.257695 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4a0fe1f-df1b-44ad-bab1-71610e650357" containerName="glance-log" Feb 28 04:54:54 crc kubenswrapper[5014]: E0228 04:54:54.257712 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80e6122e-74aa-4ee6-a7a3-4af495cb55b7" containerName="horizon" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.257720 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="80e6122e-74aa-4ee6-a7a3-4af495cb55b7" containerName="horizon" Feb 28 04:54:54 crc kubenswrapper[5014]: E0228 04:54:54.257736 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80e6122e-74aa-4ee6-a7a3-4af495cb55b7" containerName="horizon-log" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.257744 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="80e6122e-74aa-4ee6-a7a3-4af495cb55b7" containerName="horizon-log" Feb 28 04:54:54 crc kubenswrapper[5014]: E0228 04:54:54.257782 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4a0fe1f-df1b-44ad-bab1-71610e650357" containerName="glance-httpd" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.257790 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4a0fe1f-df1b-44ad-bab1-71610e650357" containerName="glance-httpd" Feb 28 04:54:54 crc kubenswrapper[5014]: 
I0228 04:54:54.257988 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="80e6122e-74aa-4ee6-a7a3-4af495cb55b7" containerName="horizon" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.258000 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="80e6122e-74aa-4ee6-a7a3-4af495cb55b7" containerName="horizon-log" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.258016 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4a0fe1f-df1b-44ad-bab1-71610e650357" containerName="glance-httpd" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.258030 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4a0fe1f-df1b-44ad-bab1-71610e650357" containerName="glance-log" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.259322 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.263098 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.263300 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.278608 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.361888 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2c655a1-25af-4c06-9799-01a3a9fd5e52-config-data\") pod \"glance-default-external-api-0\" (UID: \"f2c655a1-25af-4c06-9799-01a3a9fd5e52\") " pod="openstack/glance-default-external-api-0" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.361944 5014 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2c655a1-25af-4c06-9799-01a3a9fd5e52-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f2c655a1-25af-4c06-9799-01a3a9fd5e52\") " pod="openstack/glance-default-external-api-0" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.362002 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"f2c655a1-25af-4c06-9799-01a3a9fd5e52\") " pod="openstack/glance-default-external-api-0" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.362089 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2c655a1-25af-4c06-9799-01a3a9fd5e52-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f2c655a1-25af-4c06-9799-01a3a9fd5e52\") " pod="openstack/glance-default-external-api-0" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.362136 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2c655a1-25af-4c06-9799-01a3a9fd5e52-logs\") pod \"glance-default-external-api-0\" (UID: \"f2c655a1-25af-4c06-9799-01a3a9fd5e52\") " pod="openstack/glance-default-external-api-0" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.362184 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxjjs\" (UniqueName: \"kubernetes.io/projected/f2c655a1-25af-4c06-9799-01a3a9fd5e52-kube-api-access-gxjjs\") pod \"glance-default-external-api-0\" (UID: \"f2c655a1-25af-4c06-9799-01a3a9fd5e52\") " pod="openstack/glance-default-external-api-0" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.362305 5014 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2c655a1-25af-4c06-9799-01a3a9fd5e52-scripts\") pod \"glance-default-external-api-0\" (UID: \"f2c655a1-25af-4c06-9799-01a3a9fd5e52\") " pod="openstack/glance-default-external-api-0" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.362384 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2c655a1-25af-4c06-9799-01a3a9fd5e52-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f2c655a1-25af-4c06-9799-01a3a9fd5e52\") " pod="openstack/glance-default-external-api-0" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.464049 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2c655a1-25af-4c06-9799-01a3a9fd5e52-scripts\") pod \"glance-default-external-api-0\" (UID: \"f2c655a1-25af-4c06-9799-01a3a9fd5e52\") " pod="openstack/glance-default-external-api-0" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.464125 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2c655a1-25af-4c06-9799-01a3a9fd5e52-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f2c655a1-25af-4c06-9799-01a3a9fd5e52\") " pod="openstack/glance-default-external-api-0" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.464164 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2c655a1-25af-4c06-9799-01a3a9fd5e52-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f2c655a1-25af-4c06-9799-01a3a9fd5e52\") " pod="openstack/glance-default-external-api-0" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.464180 5014 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2c655a1-25af-4c06-9799-01a3a9fd5e52-config-data\") pod \"glance-default-external-api-0\" (UID: \"f2c655a1-25af-4c06-9799-01a3a9fd5e52\") " pod="openstack/glance-default-external-api-0" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.464217 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"f2c655a1-25af-4c06-9799-01a3a9fd5e52\") " pod="openstack/glance-default-external-api-0" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.464263 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2c655a1-25af-4c06-9799-01a3a9fd5e52-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f2c655a1-25af-4c06-9799-01a3a9fd5e52\") " pod="openstack/glance-default-external-api-0" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.464292 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2c655a1-25af-4c06-9799-01a3a9fd5e52-logs\") pod \"glance-default-external-api-0\" (UID: \"f2c655a1-25af-4c06-9799-01a3a9fd5e52\") " pod="openstack/glance-default-external-api-0" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.464321 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxjjs\" (UniqueName: \"kubernetes.io/projected/f2c655a1-25af-4c06-9799-01a3a9fd5e52-kube-api-access-gxjjs\") pod \"glance-default-external-api-0\" (UID: \"f2c655a1-25af-4c06-9799-01a3a9fd5e52\") " pod="openstack/glance-default-external-api-0" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.464695 5014 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"f2c655a1-25af-4c06-9799-01a3a9fd5e52\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.465787 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2c655a1-25af-4c06-9799-01a3a9fd5e52-logs\") pod \"glance-default-external-api-0\" (UID: \"f2c655a1-25af-4c06-9799-01a3a9fd5e52\") " pod="openstack/glance-default-external-api-0" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.466101 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2c655a1-25af-4c06-9799-01a3a9fd5e52-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f2c655a1-25af-4c06-9799-01a3a9fd5e52\") " pod="openstack/glance-default-external-api-0" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.468280 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2c655a1-25af-4c06-9799-01a3a9fd5e52-scripts\") pod \"glance-default-external-api-0\" (UID: \"f2c655a1-25af-4c06-9799-01a3a9fd5e52\") " pod="openstack/glance-default-external-api-0" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.469226 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2c655a1-25af-4c06-9799-01a3a9fd5e52-config-data\") pod \"glance-default-external-api-0\" (UID: \"f2c655a1-25af-4c06-9799-01a3a9fd5e52\") " pod="openstack/glance-default-external-api-0" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.469360 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2c655a1-25af-4c06-9799-01a3a9fd5e52-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: 
\"f2c655a1-25af-4c06-9799-01a3a9fd5e52\") " pod="openstack/glance-default-external-api-0" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.471446 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2c655a1-25af-4c06-9799-01a3a9fd5e52-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f2c655a1-25af-4c06-9799-01a3a9fd5e52\") " pod="openstack/glance-default-external-api-0" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.485075 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxjjs\" (UniqueName: \"kubernetes.io/projected/f2c655a1-25af-4c06-9799-01a3a9fd5e52-kube-api-access-gxjjs\") pod \"glance-default-external-api-0\" (UID: \"f2c655a1-25af-4c06-9799-01a3a9fd5e52\") " pod="openstack/glance-default-external-api-0" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.492991 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"f2c655a1-25af-4c06-9799-01a3a9fd5e52\") " pod="openstack/glance-default-external-api-0" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.553120 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.579585 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.914419 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-txmbr" event={"ID":"69e68ab2-dae2-4ebe-9820-c945a9897363","Type":"ContainerStarted","Data":"8a448f0ca7e013cbc317b3a7ab992f0d25fccd158f4c18c910f802df198f4a0f"} Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.919909 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c","Type":"ContainerStarted","Data":"29e92f2b484d5941acf3b08ddc3643eb1abf52ba48a98000cccf6844363cb7cf"} Feb 28 04:54:54 crc kubenswrapper[5014]: I0228 04:54:54.937471 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-txmbr" podStartSLOduration=7.132141296 podStartE2EDuration="16.937452536s" podCreationTimestamp="2026-02-28 04:54:38 +0000 UTC" firstStartedPulling="2026-02-28 04:54:43.855931209 +0000 UTC m=+1272.526057119" lastFinishedPulling="2026-02-28 04:54:53.661242449 +0000 UTC m=+1282.331368359" observedRunningTime="2026-02-28 04:54:54.936329546 +0000 UTC m=+1283.606455456" watchObservedRunningTime="2026-02-28 04:54:54.937452536 +0000 UTC m=+1283.607578436" Feb 28 04:54:55 crc kubenswrapper[5014]: I0228 04:54:55.142005 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 28 04:54:55 crc kubenswrapper[5014]: W0228 04:54:55.175584 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2c655a1_25af_4c06_9799_01a3a9fd5e52.slice/crio-fed03382bc736813026eb480799c620d5e4fa03ac59f18dc24df5fff21c3cb3c WatchSource:0}: Error finding container fed03382bc736813026eb480799c620d5e4fa03ac59f18dc24df5fff21c3cb3c: Status 404 returned error can't find the container with id 
fed03382bc736813026eb480799c620d5e4fa03ac59f18dc24df5fff21c3cb3c Feb 28 04:54:55 crc kubenswrapper[5014]: I0228 04:54:55.695547 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 28 04:54:55 crc kubenswrapper[5014]: I0228 04:54:55.793353 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6f52a80-76bd-4c20-a619-2926065d7824-combined-ca-bundle\") pod \"d6f52a80-76bd-4c20-a619-2926065d7824\" (UID: \"d6f52a80-76bd-4c20-a619-2926065d7824\") " Feb 28 04:54:55 crc kubenswrapper[5014]: I0228 04:54:55.793407 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"d6f52a80-76bd-4c20-a619-2926065d7824\" (UID: \"d6f52a80-76bd-4c20-a619-2926065d7824\") " Feb 28 04:54:55 crc kubenswrapper[5014]: I0228 04:54:55.793452 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6f52a80-76bd-4c20-a619-2926065d7824-config-data\") pod \"d6f52a80-76bd-4c20-a619-2926065d7824\" (UID: \"d6f52a80-76bd-4c20-a619-2926065d7824\") " Feb 28 04:54:55 crc kubenswrapper[5014]: I0228 04:54:55.793482 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6f52a80-76bd-4c20-a619-2926065d7824-logs\") pod \"d6f52a80-76bd-4c20-a619-2926065d7824\" (UID: \"d6f52a80-76bd-4c20-a619-2926065d7824\") " Feb 28 04:54:55 crc kubenswrapper[5014]: I0228 04:54:55.793546 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6f52a80-76bd-4c20-a619-2926065d7824-httpd-run\") pod \"d6f52a80-76bd-4c20-a619-2926065d7824\" (UID: \"d6f52a80-76bd-4c20-a619-2926065d7824\") " Feb 28 04:54:55 crc kubenswrapper[5014]: 
I0228 04:54:55.793576 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6f52a80-76bd-4c20-a619-2926065d7824-internal-tls-certs\") pod \"d6f52a80-76bd-4c20-a619-2926065d7824\" (UID: \"d6f52a80-76bd-4c20-a619-2926065d7824\") " Feb 28 04:54:55 crc kubenswrapper[5014]: I0228 04:54:55.793640 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xlbl\" (UniqueName: \"kubernetes.io/projected/d6f52a80-76bd-4c20-a619-2926065d7824-kube-api-access-9xlbl\") pod \"d6f52a80-76bd-4c20-a619-2926065d7824\" (UID: \"d6f52a80-76bd-4c20-a619-2926065d7824\") " Feb 28 04:54:55 crc kubenswrapper[5014]: I0228 04:54:55.793681 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6f52a80-76bd-4c20-a619-2926065d7824-scripts\") pod \"d6f52a80-76bd-4c20-a619-2926065d7824\" (UID: \"d6f52a80-76bd-4c20-a619-2926065d7824\") " Feb 28 04:54:55 crc kubenswrapper[5014]: I0228 04:54:55.794680 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6f52a80-76bd-4c20-a619-2926065d7824-logs" (OuterVolumeSpecName: "logs") pod "d6f52a80-76bd-4c20-a619-2926065d7824" (UID: "d6f52a80-76bd-4c20-a619-2926065d7824"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:54:55 crc kubenswrapper[5014]: I0228 04:54:55.794704 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6f52a80-76bd-4c20-a619-2926065d7824-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d6f52a80-76bd-4c20-a619-2926065d7824" (UID: "d6f52a80-76bd-4c20-a619-2926065d7824"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:54:55 crc kubenswrapper[5014]: I0228 04:54:55.804680 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "d6f52a80-76bd-4c20-a619-2926065d7824" (UID: "d6f52a80-76bd-4c20-a619-2926065d7824"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 28 04:54:55 crc kubenswrapper[5014]: I0228 04:54:55.804693 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6f52a80-76bd-4c20-a619-2926065d7824-kube-api-access-9xlbl" (OuterVolumeSpecName: "kube-api-access-9xlbl") pod "d6f52a80-76bd-4c20-a619-2926065d7824" (UID: "d6f52a80-76bd-4c20-a619-2926065d7824"). InnerVolumeSpecName "kube-api-access-9xlbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:54:55 crc kubenswrapper[5014]: I0228 04:54:55.810958 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6f52a80-76bd-4c20-a619-2926065d7824-scripts" (OuterVolumeSpecName: "scripts") pod "d6f52a80-76bd-4c20-a619-2926065d7824" (UID: "d6f52a80-76bd-4c20-a619-2926065d7824"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:54:55 crc kubenswrapper[5014]: I0228 04:54:55.896633 5014 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6f52a80-76bd-4c20-a619-2926065d7824-logs\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:55 crc kubenswrapper[5014]: I0228 04:54:55.896665 5014 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6f52a80-76bd-4c20-a619-2926065d7824-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:55 crc kubenswrapper[5014]: I0228 04:54:55.896674 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xlbl\" (UniqueName: \"kubernetes.io/projected/d6f52a80-76bd-4c20-a619-2926065d7824-kube-api-access-9xlbl\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:55 crc kubenswrapper[5014]: I0228 04:54:55.896684 5014 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6f52a80-76bd-4c20-a619-2926065d7824-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:55 crc kubenswrapper[5014]: I0228 04:54:55.896712 5014 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Feb 28 04:54:55 crc kubenswrapper[5014]: I0228 04:54:55.936912 5014 generic.go:334] "Generic (PLEG): container finished" podID="d6f52a80-76bd-4c20-a619-2926065d7824" containerID="73de03346cc98ab1dfd17fcf7da1f55cc345dd690c4d5ca09632c5e9607a6de4" exitCode=0 Feb 28 04:54:55 crc kubenswrapper[5014]: I0228 04:54:55.937002 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 28 04:54:55 crc kubenswrapper[5014]: I0228 04:54:55.936989 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d6f52a80-76bd-4c20-a619-2926065d7824","Type":"ContainerDied","Data":"73de03346cc98ab1dfd17fcf7da1f55cc345dd690c4d5ca09632c5e9607a6de4"} Feb 28 04:54:55 crc kubenswrapper[5014]: I0228 04:54:55.937133 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d6f52a80-76bd-4c20-a619-2926065d7824","Type":"ContainerDied","Data":"003a808f8f5dfadb700f4924a0a0bb0112b97b3a4ed7499f6d19db76c7f108de"} Feb 28 04:54:55 crc kubenswrapper[5014]: I0228 04:54:55.937187 5014 scope.go:117] "RemoveContainer" containerID="73de03346cc98ab1dfd17fcf7da1f55cc345dd690c4d5ca09632c5e9607a6de4" Feb 28 04:54:55 crc kubenswrapper[5014]: I0228 04:54:55.944958 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f2c655a1-25af-4c06-9799-01a3a9fd5e52","Type":"ContainerStarted","Data":"fed03382bc736813026eb480799c620d5e4fa03ac59f18dc24df5fff21c3cb3c"} Feb 28 04:54:55 crc kubenswrapper[5014]: I0228 04:54:55.948306 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6f52a80-76bd-4c20-a619-2926065d7824-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d6f52a80-76bd-4c20-a619-2926065d7824" (UID: "d6f52a80-76bd-4c20-a619-2926065d7824"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:54:55 crc kubenswrapper[5014]: I0228 04:54:55.976769 5014 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Feb 28 04:54:55 crc kubenswrapper[5014]: I0228 04:54:55.999614 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6f52a80-76bd-4c20-a619-2926065d7824-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:55 crc kubenswrapper[5014]: I0228 04:54:55.999653 5014 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.018084 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6f52a80-76bd-4c20-a619-2926065d7824-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d6f52a80-76bd-4c20-a619-2926065d7824" (UID: "d6f52a80-76bd-4c20-a619-2926065d7824"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.024399 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6f52a80-76bd-4c20-a619-2926065d7824-config-data" (OuterVolumeSpecName: "config-data") pod "d6f52a80-76bd-4c20-a619-2926065d7824" (UID: "d6f52a80-76bd-4c20-a619-2926065d7824"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.076561 5014 scope.go:117] "RemoveContainer" containerID="3d26f8527acbd6672331b9c2d4d418035dbc87e4072030173ebbbdfee80972e7" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.103570 5014 scope.go:117] "RemoveContainer" containerID="73de03346cc98ab1dfd17fcf7da1f55cc345dd690c4d5ca09632c5e9607a6de4" Feb 28 04:54:56 crc kubenswrapper[5014]: E0228 04:54:56.104657 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73de03346cc98ab1dfd17fcf7da1f55cc345dd690c4d5ca09632c5e9607a6de4\": container with ID starting with 73de03346cc98ab1dfd17fcf7da1f55cc345dd690c4d5ca09632c5e9607a6de4 not found: ID does not exist" containerID="73de03346cc98ab1dfd17fcf7da1f55cc345dd690c4d5ca09632c5e9607a6de4" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.104723 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73de03346cc98ab1dfd17fcf7da1f55cc345dd690c4d5ca09632c5e9607a6de4"} err="failed to get container status \"73de03346cc98ab1dfd17fcf7da1f55cc345dd690c4d5ca09632c5e9607a6de4\": rpc error: code = NotFound desc = could not find container \"73de03346cc98ab1dfd17fcf7da1f55cc345dd690c4d5ca09632c5e9607a6de4\": container with ID starting with 73de03346cc98ab1dfd17fcf7da1f55cc345dd690c4d5ca09632c5e9607a6de4 not found: ID does not exist" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.104817 5014 scope.go:117] "RemoveContainer" containerID="3d26f8527acbd6672331b9c2d4d418035dbc87e4072030173ebbbdfee80972e7" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.105909 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6f52a80-76bd-4c20-a619-2926065d7824-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.105948 5014 
reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6f52a80-76bd-4c20-a619-2926065d7824-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 04:54:56 crc kubenswrapper[5014]: E0228 04:54:56.106592 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d26f8527acbd6672331b9c2d4d418035dbc87e4072030173ebbbdfee80972e7\": container with ID starting with 3d26f8527acbd6672331b9c2d4d418035dbc87e4072030173ebbbdfee80972e7 not found: ID does not exist" containerID="3d26f8527acbd6672331b9c2d4d418035dbc87e4072030173ebbbdfee80972e7" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.106622 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d26f8527acbd6672331b9c2d4d418035dbc87e4072030173ebbbdfee80972e7"} err="failed to get container status \"3d26f8527acbd6672331b9c2d4d418035dbc87e4072030173ebbbdfee80972e7\": rpc error: code = NotFound desc = could not find container \"3d26f8527acbd6672331b9c2d4d418035dbc87e4072030173ebbbdfee80972e7\": container with ID starting with 3d26f8527acbd6672331b9c2d4d418035dbc87e4072030173ebbbdfee80972e7 not found: ID does not exist" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.183424 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4a0fe1f-df1b-44ad-bab1-71610e650357" path="/var/lib/kubelet/pods/f4a0fe1f-df1b-44ad-bab1-71610e650357/volumes" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.273352 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.289995 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.303920 5014 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/glance-default-internal-api-0"] Feb 28 04:54:56 crc kubenswrapper[5014]: E0228 04:54:56.304258 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6f52a80-76bd-4c20-a619-2926065d7824" containerName="glance-log" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.304270 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6f52a80-76bd-4c20-a619-2926065d7824" containerName="glance-log" Feb 28 04:54:56 crc kubenswrapper[5014]: E0228 04:54:56.304313 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6f52a80-76bd-4c20-a619-2926065d7824" containerName="glance-httpd" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.304321 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6f52a80-76bd-4c20-a619-2926065d7824" containerName="glance-httpd" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.304502 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6f52a80-76bd-4c20-a619-2926065d7824" containerName="glance-httpd" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.304531 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6f52a80-76bd-4c20-a619-2926065d7824" containerName="glance-log" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.305489 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.309085 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.309271 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.325665 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.416106 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b75610f5-509e-4ffa-a5fe-0eaa0dbcce98-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b75610f5-509e-4ffa-a5fe-0eaa0dbcce98\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.416216 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b75610f5-509e-4ffa-a5fe-0eaa0dbcce98-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b75610f5-509e-4ffa-a5fe-0eaa0dbcce98\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.416261 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b75610f5-509e-4ffa-a5fe-0eaa0dbcce98-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b75610f5-509e-4ffa-a5fe-0eaa0dbcce98\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.416533 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/b75610f5-509e-4ffa-a5fe-0eaa0dbcce98-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b75610f5-509e-4ffa-a5fe-0eaa0dbcce98\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.416668 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b75610f5-509e-4ffa-a5fe-0eaa0dbcce98-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b75610f5-509e-4ffa-a5fe-0eaa0dbcce98\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.416741 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"b75610f5-509e-4ffa-a5fe-0eaa0dbcce98\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.416891 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh9tp\" (UniqueName: \"kubernetes.io/projected/b75610f5-509e-4ffa-a5fe-0eaa0dbcce98-kube-api-access-kh9tp\") pod \"glance-default-internal-api-0\" (UID: \"b75610f5-509e-4ffa-a5fe-0eaa0dbcce98\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.416934 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b75610f5-509e-4ffa-a5fe-0eaa0dbcce98-logs\") pod \"glance-default-internal-api-0\" (UID: \"b75610f5-509e-4ffa-a5fe-0eaa0dbcce98\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.520328 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/b75610f5-509e-4ffa-a5fe-0eaa0dbcce98-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b75610f5-509e-4ffa-a5fe-0eaa0dbcce98\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.520446 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b75610f5-509e-4ffa-a5fe-0eaa0dbcce98-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b75610f5-509e-4ffa-a5fe-0eaa0dbcce98\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.520492 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b75610f5-509e-4ffa-a5fe-0eaa0dbcce98-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b75610f5-509e-4ffa-a5fe-0eaa0dbcce98\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.520519 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b75610f5-509e-4ffa-a5fe-0eaa0dbcce98-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b75610f5-509e-4ffa-a5fe-0eaa0dbcce98\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.520554 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b75610f5-509e-4ffa-a5fe-0eaa0dbcce98-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b75610f5-509e-4ffa-a5fe-0eaa0dbcce98\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.520620 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"b75610f5-509e-4ffa-a5fe-0eaa0dbcce98\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.520689 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh9tp\" (UniqueName: \"kubernetes.io/projected/b75610f5-509e-4ffa-a5fe-0eaa0dbcce98-kube-api-access-kh9tp\") pod \"glance-default-internal-api-0\" (UID: \"b75610f5-509e-4ffa-a5fe-0eaa0dbcce98\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.520726 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b75610f5-509e-4ffa-a5fe-0eaa0dbcce98-logs\") pod \"glance-default-internal-api-0\" (UID: \"b75610f5-509e-4ffa-a5fe-0eaa0dbcce98\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.521371 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b75610f5-509e-4ffa-a5fe-0eaa0dbcce98-logs\") pod \"glance-default-internal-api-0\" (UID: \"b75610f5-509e-4ffa-a5fe-0eaa0dbcce98\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.524143 5014 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"b75610f5-509e-4ffa-a5fe-0eaa0dbcce98\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/glance-default-internal-api-0" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.524780 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b75610f5-509e-4ffa-a5fe-0eaa0dbcce98-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b75610f5-509e-4ffa-a5fe-0eaa0dbcce98\") 
" pod="openstack/glance-default-internal-api-0" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.526553 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b75610f5-509e-4ffa-a5fe-0eaa0dbcce98-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b75610f5-509e-4ffa-a5fe-0eaa0dbcce98\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.529736 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b75610f5-509e-4ffa-a5fe-0eaa0dbcce98-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b75610f5-509e-4ffa-a5fe-0eaa0dbcce98\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.530632 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b75610f5-509e-4ffa-a5fe-0eaa0dbcce98-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b75610f5-509e-4ffa-a5fe-0eaa0dbcce98\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.535227 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b75610f5-509e-4ffa-a5fe-0eaa0dbcce98-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b75610f5-509e-4ffa-a5fe-0eaa0dbcce98\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.543932 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh9tp\" (UniqueName: \"kubernetes.io/projected/b75610f5-509e-4ffa-a5fe-0eaa0dbcce98-kube-api-access-kh9tp\") pod \"glance-default-internal-api-0\" (UID: \"b75610f5-509e-4ffa-a5fe-0eaa0dbcce98\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:54:56 crc 
kubenswrapper[5014]: I0228 04:54:56.564199 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"b75610f5-509e-4ffa-a5fe-0eaa0dbcce98\") " pod="openstack/glance-default-internal-api-0" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.651176 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 28 04:54:56 crc kubenswrapper[5014]: I0228 04:54:56.960827 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f2c655a1-25af-4c06-9799-01a3a9fd5e52","Type":"ContainerStarted","Data":"f1192917a757f1ba0da77b5fadc304eb10ada9f9a5766f63f7b669914a14fdda"} Feb 28 04:54:57 crc kubenswrapper[5014]: I0228 04:54:57.195113 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 28 04:54:57 crc kubenswrapper[5014]: I0228 04:54:57.973673 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f2c655a1-25af-4c06-9799-01a3a9fd5e52","Type":"ContainerStarted","Data":"4d63ce1edfd7b5fbea1f036cb4cae3365eb5fbcda0ce2e26cc208e7fd20daf7c"} Feb 28 04:54:57 crc kubenswrapper[5014]: I0228 04:54:57.986789 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b75610f5-509e-4ffa-a5fe-0eaa0dbcce98","Type":"ContainerStarted","Data":"8ec780a8225a985d1cf56dcd2193d7569dfbf201d50847fffaae58d16baed27c"} Feb 28 04:54:57 crc kubenswrapper[5014]: I0228 04:54:57.986842 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b75610f5-509e-4ffa-a5fe-0eaa0dbcce98","Type":"ContainerStarted","Data":"67a65acad1065a44521b84ce56f6f509ec39a0f6dac71994fd8fa6da2479c0d1"} Feb 28 04:54:58 crc kubenswrapper[5014]: I0228 
04:54:58.190718 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6f52a80-76bd-4c20-a619-2926065d7824" path="/var/lib/kubelet/pods/d6f52a80-76bd-4c20-a619-2926065d7824/volumes" Feb 28 04:54:58 crc kubenswrapper[5014]: I0228 04:54:58.998987 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b75610f5-509e-4ffa-a5fe-0eaa0dbcce98","Type":"ContainerStarted","Data":"c835b3b6a04c3b3f83b465b93bb30df61539e7416c14ac45a3e64e70eaab4204"} Feb 28 04:54:59 crc kubenswrapper[5014]: I0228 04:54:59.003232 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c","Type":"ContainerStarted","Data":"4f769ba54d9f66c220a25d7d4fcf0a5a2a4ac39df42c44cbe4cb2d9c16737ca8"} Feb 28 04:54:59 crc kubenswrapper[5014]: I0228 04:54:59.003563 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c" containerName="ceilometer-central-agent" containerID="cri-o://870a6292ca4c75aa5f7d0dfa053c34f950551c774a9d218a8d6fa678d7dbede8" gracePeriod=30 Feb 28 04:54:59 crc kubenswrapper[5014]: I0228 04:54:59.003695 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 28 04:54:59 crc kubenswrapper[5014]: I0228 04:54:59.003739 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c" containerName="proxy-httpd" containerID="cri-o://4f769ba54d9f66c220a25d7d4fcf0a5a2a4ac39df42c44cbe4cb2d9c16737ca8" gracePeriod=30 Feb 28 04:54:59 crc kubenswrapper[5014]: I0228 04:54:59.003785 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c" containerName="sg-core" 
containerID="cri-o://29e92f2b484d5941acf3b08ddc3643eb1abf52ba48a98000cccf6844363cb7cf" gracePeriod=30 Feb 28 04:54:59 crc kubenswrapper[5014]: I0228 04:54:59.003844 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c" containerName="ceilometer-notification-agent" containerID="cri-o://ff0b3e22de6cde25994ed66c5115e9017e184ff0e041f656e1b6a03d4577ff33" gracePeriod=30 Feb 28 04:54:59 crc kubenswrapper[5014]: I0228 04:54:59.042880 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.042860687 podStartE2EDuration="3.042860687s" podCreationTimestamp="2026-02-28 04:54:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:54:59.026477207 +0000 UTC m=+1287.696603117" watchObservedRunningTime="2026-02-28 04:54:59.042860687 +0000 UTC m=+1287.712986597" Feb 28 04:54:59 crc kubenswrapper[5014]: I0228 04:54:59.044269 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.044254994 podStartE2EDuration="5.044254994s" podCreationTimestamp="2026-02-28 04:54:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:54:57.997701569 +0000 UTC m=+1286.667827479" watchObservedRunningTime="2026-02-28 04:54:59.044254994 +0000 UTC m=+1287.714380904" Feb 28 04:54:59 crc kubenswrapper[5014]: I0228 04:54:59.066503 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.307641022 podStartE2EDuration="15.06647903s" podCreationTimestamp="2026-02-28 04:54:44 +0000 UTC" firstStartedPulling="2026-02-28 04:54:45.007892873 +0000 UTC m=+1273.678018783" 
lastFinishedPulling="2026-02-28 04:54:57.766730881 +0000 UTC m=+1286.436856791" observedRunningTime="2026-02-28 04:54:59.061548617 +0000 UTC m=+1287.731674567" watchObservedRunningTime="2026-02-28 04:54:59.06647903 +0000 UTC m=+1287.736604940" Feb 28 04:54:59 crc kubenswrapper[5014]: I0228 04:54:59.951064 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.021417 5014 generic.go:334] "Generic (PLEG): container finished" podID="1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c" containerID="4f769ba54d9f66c220a25d7d4fcf0a5a2a4ac39df42c44cbe4cb2d9c16737ca8" exitCode=0 Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.021455 5014 generic.go:334] "Generic (PLEG): container finished" podID="1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c" containerID="29e92f2b484d5941acf3b08ddc3643eb1abf52ba48a98000cccf6844363cb7cf" exitCode=2 Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.021457 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c","Type":"ContainerDied","Data":"4f769ba54d9f66c220a25d7d4fcf0a5a2a4ac39df42c44cbe4cb2d9c16737ca8"} Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.021530 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c","Type":"ContainerDied","Data":"29e92f2b484d5941acf3b08ddc3643eb1abf52ba48a98000cccf6844363cb7cf"} Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.021468 5014 generic.go:334] "Generic (PLEG): container finished" podID="1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c" containerID="ff0b3e22de6cde25994ed66c5115e9017e184ff0e041f656e1b6a03d4577ff33" exitCode=0 Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.021545 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c","Type":"ContainerDied","Data":"ff0b3e22de6cde25994ed66c5115e9017e184ff0e041f656e1b6a03d4577ff33"} Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.021561 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c","Type":"ContainerDied","Data":"870a6292ca4c75aa5f7d0dfa053c34f950551c774a9d218a8d6fa678d7dbede8"} Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.021563 5014 scope.go:117] "RemoveContainer" containerID="4f769ba54d9f66c220a25d7d4fcf0a5a2a4ac39df42c44cbe4cb2d9c16737ca8" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.021549 5014 generic.go:334] "Generic (PLEG): container finished" podID="1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c" containerID="870a6292ca4c75aa5f7d0dfa053c34f950551c774a9d218a8d6fa678d7dbede8" exitCode=0 Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.021726 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c","Type":"ContainerDied","Data":"93f2a4223811e8fd9dcc7c361b080eba544ef913d91f77f37ae111ad0589f3d4"} Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.022920 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.043569 5014 scope.go:117] "RemoveContainer" containerID="29e92f2b484d5941acf3b08ddc3643eb1abf52ba48a98000cccf6844363cb7cf" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.059800 5014 scope.go:117] "RemoveContainer" containerID="ff0b3e22de6cde25994ed66c5115e9017e184ff0e041f656e1b6a03d4577ff33" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.093032 5014 scope.go:117] "RemoveContainer" containerID="870a6292ca4c75aa5f7d0dfa053c34f950551c774a9d218a8d6fa678d7dbede8" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.093501 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d7x9\" (UniqueName: \"kubernetes.io/projected/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-kube-api-access-2d7x9\") pod \"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c\" (UID: \"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c\") " Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.093606 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-scripts\") pod \"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c\" (UID: \"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c\") " Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.093679 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-run-httpd\") pod \"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c\" (UID: \"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c\") " Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.093722 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-combined-ca-bundle\") pod \"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c\" (UID: \"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c\") " 
Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.093826 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-config-data\") pod \"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c\" (UID: \"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c\") " Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.093846 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-sg-core-conf-yaml\") pod \"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c\" (UID: \"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c\") " Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.093904 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-log-httpd\") pod \"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c\" (UID: \"1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c\") " Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.095109 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c" (UID: "1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.095761 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c" (UID: "1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.100180 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-kube-api-access-2d7x9" (OuterVolumeSpecName: "kube-api-access-2d7x9") pod "1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c" (UID: "1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c"). InnerVolumeSpecName "kube-api-access-2d7x9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.100368 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-scripts" (OuterVolumeSpecName: "scripts") pod "1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c" (UID: "1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.114926 5014 scope.go:117] "RemoveContainer" containerID="4f769ba54d9f66c220a25d7d4fcf0a5a2a4ac39df42c44cbe4cb2d9c16737ca8" Feb 28 04:55:00 crc kubenswrapper[5014]: E0228 04:55:00.115462 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f769ba54d9f66c220a25d7d4fcf0a5a2a4ac39df42c44cbe4cb2d9c16737ca8\": container with ID starting with 4f769ba54d9f66c220a25d7d4fcf0a5a2a4ac39df42c44cbe4cb2d9c16737ca8 not found: ID does not exist" containerID="4f769ba54d9f66c220a25d7d4fcf0a5a2a4ac39df42c44cbe4cb2d9c16737ca8" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.115519 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f769ba54d9f66c220a25d7d4fcf0a5a2a4ac39df42c44cbe4cb2d9c16737ca8"} err="failed to get container status \"4f769ba54d9f66c220a25d7d4fcf0a5a2a4ac39df42c44cbe4cb2d9c16737ca8\": rpc error: code = NotFound desc = could not find container 
\"4f769ba54d9f66c220a25d7d4fcf0a5a2a4ac39df42c44cbe4cb2d9c16737ca8\": container with ID starting with 4f769ba54d9f66c220a25d7d4fcf0a5a2a4ac39df42c44cbe4cb2d9c16737ca8 not found: ID does not exist" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.115548 5014 scope.go:117] "RemoveContainer" containerID="29e92f2b484d5941acf3b08ddc3643eb1abf52ba48a98000cccf6844363cb7cf" Feb 28 04:55:00 crc kubenswrapper[5014]: E0228 04:55:00.115890 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"29e92f2b484d5941acf3b08ddc3643eb1abf52ba48a98000cccf6844363cb7cf\": container with ID starting with 29e92f2b484d5941acf3b08ddc3643eb1abf52ba48a98000cccf6844363cb7cf not found: ID does not exist" containerID="29e92f2b484d5941acf3b08ddc3643eb1abf52ba48a98000cccf6844363cb7cf" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.115928 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29e92f2b484d5941acf3b08ddc3643eb1abf52ba48a98000cccf6844363cb7cf"} err="failed to get container status \"29e92f2b484d5941acf3b08ddc3643eb1abf52ba48a98000cccf6844363cb7cf\": rpc error: code = NotFound desc = could not find container \"29e92f2b484d5941acf3b08ddc3643eb1abf52ba48a98000cccf6844363cb7cf\": container with ID starting with 29e92f2b484d5941acf3b08ddc3643eb1abf52ba48a98000cccf6844363cb7cf not found: ID does not exist" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.115955 5014 scope.go:117] "RemoveContainer" containerID="ff0b3e22de6cde25994ed66c5115e9017e184ff0e041f656e1b6a03d4577ff33" Feb 28 04:55:00 crc kubenswrapper[5014]: E0228 04:55:00.116240 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff0b3e22de6cde25994ed66c5115e9017e184ff0e041f656e1b6a03d4577ff33\": container with ID starting with ff0b3e22de6cde25994ed66c5115e9017e184ff0e041f656e1b6a03d4577ff33 not found: ID does not exist" 
containerID="ff0b3e22de6cde25994ed66c5115e9017e184ff0e041f656e1b6a03d4577ff33" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.116287 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff0b3e22de6cde25994ed66c5115e9017e184ff0e041f656e1b6a03d4577ff33"} err="failed to get container status \"ff0b3e22de6cde25994ed66c5115e9017e184ff0e041f656e1b6a03d4577ff33\": rpc error: code = NotFound desc = could not find container \"ff0b3e22de6cde25994ed66c5115e9017e184ff0e041f656e1b6a03d4577ff33\": container with ID starting with ff0b3e22de6cde25994ed66c5115e9017e184ff0e041f656e1b6a03d4577ff33 not found: ID does not exist" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.116316 5014 scope.go:117] "RemoveContainer" containerID="870a6292ca4c75aa5f7d0dfa053c34f950551c774a9d218a8d6fa678d7dbede8" Feb 28 04:55:00 crc kubenswrapper[5014]: E0228 04:55:00.116586 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"870a6292ca4c75aa5f7d0dfa053c34f950551c774a9d218a8d6fa678d7dbede8\": container with ID starting with 870a6292ca4c75aa5f7d0dfa053c34f950551c774a9d218a8d6fa678d7dbede8 not found: ID does not exist" containerID="870a6292ca4c75aa5f7d0dfa053c34f950551c774a9d218a8d6fa678d7dbede8" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.116629 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"870a6292ca4c75aa5f7d0dfa053c34f950551c774a9d218a8d6fa678d7dbede8"} err="failed to get container status \"870a6292ca4c75aa5f7d0dfa053c34f950551c774a9d218a8d6fa678d7dbede8\": rpc error: code = NotFound desc = could not find container \"870a6292ca4c75aa5f7d0dfa053c34f950551c774a9d218a8d6fa678d7dbede8\": container with ID starting with 870a6292ca4c75aa5f7d0dfa053c34f950551c774a9d218a8d6fa678d7dbede8 not found: ID does not exist" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.116668 5014 scope.go:117] 
"RemoveContainer" containerID="4f769ba54d9f66c220a25d7d4fcf0a5a2a4ac39df42c44cbe4cb2d9c16737ca8" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.118708 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f769ba54d9f66c220a25d7d4fcf0a5a2a4ac39df42c44cbe4cb2d9c16737ca8"} err="failed to get container status \"4f769ba54d9f66c220a25d7d4fcf0a5a2a4ac39df42c44cbe4cb2d9c16737ca8\": rpc error: code = NotFound desc = could not find container \"4f769ba54d9f66c220a25d7d4fcf0a5a2a4ac39df42c44cbe4cb2d9c16737ca8\": container with ID starting with 4f769ba54d9f66c220a25d7d4fcf0a5a2a4ac39df42c44cbe4cb2d9c16737ca8 not found: ID does not exist" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.118738 5014 scope.go:117] "RemoveContainer" containerID="29e92f2b484d5941acf3b08ddc3643eb1abf52ba48a98000cccf6844363cb7cf" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.119065 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29e92f2b484d5941acf3b08ddc3643eb1abf52ba48a98000cccf6844363cb7cf"} err="failed to get container status \"29e92f2b484d5941acf3b08ddc3643eb1abf52ba48a98000cccf6844363cb7cf\": rpc error: code = NotFound desc = could not find container \"29e92f2b484d5941acf3b08ddc3643eb1abf52ba48a98000cccf6844363cb7cf\": container with ID starting with 29e92f2b484d5941acf3b08ddc3643eb1abf52ba48a98000cccf6844363cb7cf not found: ID does not exist" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.119103 5014 scope.go:117] "RemoveContainer" containerID="ff0b3e22de6cde25994ed66c5115e9017e184ff0e041f656e1b6a03d4577ff33" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.119304 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff0b3e22de6cde25994ed66c5115e9017e184ff0e041f656e1b6a03d4577ff33"} err="failed to get container status \"ff0b3e22de6cde25994ed66c5115e9017e184ff0e041f656e1b6a03d4577ff33\": rpc error: code = 
NotFound desc = could not find container \"ff0b3e22de6cde25994ed66c5115e9017e184ff0e041f656e1b6a03d4577ff33\": container with ID starting with ff0b3e22de6cde25994ed66c5115e9017e184ff0e041f656e1b6a03d4577ff33 not found: ID does not exist" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.119322 5014 scope.go:117] "RemoveContainer" containerID="870a6292ca4c75aa5f7d0dfa053c34f950551c774a9d218a8d6fa678d7dbede8" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.119578 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"870a6292ca4c75aa5f7d0dfa053c34f950551c774a9d218a8d6fa678d7dbede8"} err="failed to get container status \"870a6292ca4c75aa5f7d0dfa053c34f950551c774a9d218a8d6fa678d7dbede8\": rpc error: code = NotFound desc = could not find container \"870a6292ca4c75aa5f7d0dfa053c34f950551c774a9d218a8d6fa678d7dbede8\": container with ID starting with 870a6292ca4c75aa5f7d0dfa053c34f950551c774a9d218a8d6fa678d7dbede8 not found: ID does not exist" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.119624 5014 scope.go:117] "RemoveContainer" containerID="4f769ba54d9f66c220a25d7d4fcf0a5a2a4ac39df42c44cbe4cb2d9c16737ca8" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.119867 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f769ba54d9f66c220a25d7d4fcf0a5a2a4ac39df42c44cbe4cb2d9c16737ca8"} err="failed to get container status \"4f769ba54d9f66c220a25d7d4fcf0a5a2a4ac39df42c44cbe4cb2d9c16737ca8\": rpc error: code = NotFound desc = could not find container \"4f769ba54d9f66c220a25d7d4fcf0a5a2a4ac39df42c44cbe4cb2d9c16737ca8\": container with ID starting with 4f769ba54d9f66c220a25d7d4fcf0a5a2a4ac39df42c44cbe4cb2d9c16737ca8 not found: ID does not exist" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.119912 5014 scope.go:117] "RemoveContainer" containerID="29e92f2b484d5941acf3b08ddc3643eb1abf52ba48a98000cccf6844363cb7cf" Feb 28 04:55:00 crc 
kubenswrapper[5014]: I0228 04:55:00.120134 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29e92f2b484d5941acf3b08ddc3643eb1abf52ba48a98000cccf6844363cb7cf"} err="failed to get container status \"29e92f2b484d5941acf3b08ddc3643eb1abf52ba48a98000cccf6844363cb7cf\": rpc error: code = NotFound desc = could not find container \"29e92f2b484d5941acf3b08ddc3643eb1abf52ba48a98000cccf6844363cb7cf\": container with ID starting with 29e92f2b484d5941acf3b08ddc3643eb1abf52ba48a98000cccf6844363cb7cf not found: ID does not exist" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.120201 5014 scope.go:117] "RemoveContainer" containerID="ff0b3e22de6cde25994ed66c5115e9017e184ff0e041f656e1b6a03d4577ff33" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.120747 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff0b3e22de6cde25994ed66c5115e9017e184ff0e041f656e1b6a03d4577ff33"} err="failed to get container status \"ff0b3e22de6cde25994ed66c5115e9017e184ff0e041f656e1b6a03d4577ff33\": rpc error: code = NotFound desc = could not find container \"ff0b3e22de6cde25994ed66c5115e9017e184ff0e041f656e1b6a03d4577ff33\": container with ID starting with ff0b3e22de6cde25994ed66c5115e9017e184ff0e041f656e1b6a03d4577ff33 not found: ID does not exist" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.120770 5014 scope.go:117] "RemoveContainer" containerID="870a6292ca4c75aa5f7d0dfa053c34f950551c774a9d218a8d6fa678d7dbede8" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.121008 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"870a6292ca4c75aa5f7d0dfa053c34f950551c774a9d218a8d6fa678d7dbede8"} err="failed to get container status \"870a6292ca4c75aa5f7d0dfa053c34f950551c774a9d218a8d6fa678d7dbede8\": rpc error: code = NotFound desc = could not find container \"870a6292ca4c75aa5f7d0dfa053c34f950551c774a9d218a8d6fa678d7dbede8\": container 
with ID starting with 870a6292ca4c75aa5f7d0dfa053c34f950551c774a9d218a8d6fa678d7dbede8 not found: ID does not exist" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.121050 5014 scope.go:117] "RemoveContainer" containerID="4f769ba54d9f66c220a25d7d4fcf0a5a2a4ac39df42c44cbe4cb2d9c16737ca8" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.121700 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f769ba54d9f66c220a25d7d4fcf0a5a2a4ac39df42c44cbe4cb2d9c16737ca8"} err="failed to get container status \"4f769ba54d9f66c220a25d7d4fcf0a5a2a4ac39df42c44cbe4cb2d9c16737ca8\": rpc error: code = NotFound desc = could not find container \"4f769ba54d9f66c220a25d7d4fcf0a5a2a4ac39df42c44cbe4cb2d9c16737ca8\": container with ID starting with 4f769ba54d9f66c220a25d7d4fcf0a5a2a4ac39df42c44cbe4cb2d9c16737ca8 not found: ID does not exist" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.121730 5014 scope.go:117] "RemoveContainer" containerID="29e92f2b484d5941acf3b08ddc3643eb1abf52ba48a98000cccf6844363cb7cf" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.121987 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29e92f2b484d5941acf3b08ddc3643eb1abf52ba48a98000cccf6844363cb7cf"} err="failed to get container status \"29e92f2b484d5941acf3b08ddc3643eb1abf52ba48a98000cccf6844363cb7cf\": rpc error: code = NotFound desc = could not find container \"29e92f2b484d5941acf3b08ddc3643eb1abf52ba48a98000cccf6844363cb7cf\": container with ID starting with 29e92f2b484d5941acf3b08ddc3643eb1abf52ba48a98000cccf6844363cb7cf not found: ID does not exist" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.122020 5014 scope.go:117] "RemoveContainer" containerID="ff0b3e22de6cde25994ed66c5115e9017e184ff0e041f656e1b6a03d4577ff33" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.122336 5014 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ff0b3e22de6cde25994ed66c5115e9017e184ff0e041f656e1b6a03d4577ff33"} err="failed to get container status \"ff0b3e22de6cde25994ed66c5115e9017e184ff0e041f656e1b6a03d4577ff33\": rpc error: code = NotFound desc = could not find container \"ff0b3e22de6cde25994ed66c5115e9017e184ff0e041f656e1b6a03d4577ff33\": container with ID starting with ff0b3e22de6cde25994ed66c5115e9017e184ff0e041f656e1b6a03d4577ff33 not found: ID does not exist" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.122360 5014 scope.go:117] "RemoveContainer" containerID="870a6292ca4c75aa5f7d0dfa053c34f950551c774a9d218a8d6fa678d7dbede8" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.122547 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"870a6292ca4c75aa5f7d0dfa053c34f950551c774a9d218a8d6fa678d7dbede8"} err="failed to get container status \"870a6292ca4c75aa5f7d0dfa053c34f950551c774a9d218a8d6fa678d7dbede8\": rpc error: code = NotFound desc = could not find container \"870a6292ca4c75aa5f7d0dfa053c34f950551c774a9d218a8d6fa678d7dbede8\": container with ID starting with 870a6292ca4c75aa5f7d0dfa053c34f950551c774a9d218a8d6fa678d7dbede8 not found: ID does not exist" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.127942 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c" (UID: "1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.202153 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c" (UID: "1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.202670 5014 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.202699 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.202722 5014 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.202734 5014 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.202747 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d7x9\" (UniqueName: \"kubernetes.io/projected/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-kube-api-access-2d7x9\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.202761 5014 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.226369 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-config-data" (OuterVolumeSpecName: "config-data") pod "1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c" (UID: "1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.304401 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.360581 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.371980 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.396925 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:55:00 crc kubenswrapper[5014]: E0228 04:55:00.397432 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c" containerName="proxy-httpd" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.397457 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c" containerName="proxy-httpd" Feb 28 04:55:00 crc kubenswrapper[5014]: E0228 04:55:00.397494 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c" containerName="ceilometer-notification-agent" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.397504 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c" 
containerName="ceilometer-notification-agent" Feb 28 04:55:00 crc kubenswrapper[5014]: E0228 04:55:00.397523 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c" containerName="sg-core" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.397532 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c" containerName="sg-core" Feb 28 04:55:00 crc kubenswrapper[5014]: E0228 04:55:00.397547 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c" containerName="ceilometer-central-agent" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.397556 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c" containerName="ceilometer-central-agent" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.397767 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c" containerName="sg-core" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.397787 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c" containerName="proxy-httpd" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.397836 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c" containerName="ceilometer-central-agent" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.397849 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c" containerName="ceilometer-notification-agent" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.401238 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.405094 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.405992 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.419189 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.507681 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b052d62-5806-4883-877c-e88c7d7deedc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2b052d62-5806-4883-877c-e88c7d7deedc\") " pod="openstack/ceilometer-0" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.507736 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b052d62-5806-4883-877c-e88c7d7deedc-log-httpd\") pod \"ceilometer-0\" (UID: \"2b052d62-5806-4883-877c-e88c7d7deedc\") " pod="openstack/ceilometer-0" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.507764 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b052d62-5806-4883-877c-e88c7d7deedc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2b052d62-5806-4883-877c-e88c7d7deedc\") " pod="openstack/ceilometer-0" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.507781 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b052d62-5806-4883-877c-e88c7d7deedc-config-data\") pod \"ceilometer-0\" (UID: \"2b052d62-5806-4883-877c-e88c7d7deedc\") " 
pod="openstack/ceilometer-0" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.507796 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b052d62-5806-4883-877c-e88c7d7deedc-run-httpd\") pod \"ceilometer-0\" (UID: \"2b052d62-5806-4883-877c-e88c7d7deedc\") " pod="openstack/ceilometer-0" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.507842 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b052d62-5806-4883-877c-e88c7d7deedc-scripts\") pod \"ceilometer-0\" (UID: \"2b052d62-5806-4883-877c-e88c7d7deedc\") " pod="openstack/ceilometer-0" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.507958 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqskm\" (UniqueName: \"kubernetes.io/projected/2b052d62-5806-4883-877c-e88c7d7deedc-kube-api-access-sqskm\") pod \"ceilometer-0\" (UID: \"2b052d62-5806-4883-877c-e88c7d7deedc\") " pod="openstack/ceilometer-0" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.609978 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b052d62-5806-4883-877c-e88c7d7deedc-scripts\") pod \"ceilometer-0\" (UID: \"2b052d62-5806-4883-877c-e88c7d7deedc\") " pod="openstack/ceilometer-0" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.610229 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqskm\" (UniqueName: \"kubernetes.io/projected/2b052d62-5806-4883-877c-e88c7d7deedc-kube-api-access-sqskm\") pod \"ceilometer-0\" (UID: \"2b052d62-5806-4883-877c-e88c7d7deedc\") " pod="openstack/ceilometer-0" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.610355 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b052d62-5806-4883-877c-e88c7d7deedc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2b052d62-5806-4883-877c-e88c7d7deedc\") " pod="openstack/ceilometer-0" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.610460 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b052d62-5806-4883-877c-e88c7d7deedc-log-httpd\") pod \"ceilometer-0\" (UID: \"2b052d62-5806-4883-877c-e88c7d7deedc\") " pod="openstack/ceilometer-0" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.610539 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b052d62-5806-4883-877c-e88c7d7deedc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2b052d62-5806-4883-877c-e88c7d7deedc\") " pod="openstack/ceilometer-0" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.610595 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b052d62-5806-4883-877c-e88c7d7deedc-config-data\") pod \"ceilometer-0\" (UID: \"2b052d62-5806-4883-877c-e88c7d7deedc\") " pod="openstack/ceilometer-0" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.610636 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b052d62-5806-4883-877c-e88c7d7deedc-run-httpd\") pod \"ceilometer-0\" (UID: \"2b052d62-5806-4883-877c-e88c7d7deedc\") " pod="openstack/ceilometer-0" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.611121 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b052d62-5806-4883-877c-e88c7d7deedc-log-httpd\") pod \"ceilometer-0\" (UID: \"2b052d62-5806-4883-877c-e88c7d7deedc\") " pod="openstack/ceilometer-0" Feb 28 04:55:00 crc kubenswrapper[5014]: 
I0228 04:55:00.612051 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b052d62-5806-4883-877c-e88c7d7deedc-run-httpd\") pod \"ceilometer-0\" (UID: \"2b052d62-5806-4883-877c-e88c7d7deedc\") " pod="openstack/ceilometer-0" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.616434 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b052d62-5806-4883-877c-e88c7d7deedc-scripts\") pod \"ceilometer-0\" (UID: \"2b052d62-5806-4883-877c-e88c7d7deedc\") " pod="openstack/ceilometer-0" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.618697 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b052d62-5806-4883-877c-e88c7d7deedc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2b052d62-5806-4883-877c-e88c7d7deedc\") " pod="openstack/ceilometer-0" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.620183 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b052d62-5806-4883-877c-e88c7d7deedc-config-data\") pod \"ceilometer-0\" (UID: \"2b052d62-5806-4883-877c-e88c7d7deedc\") " pod="openstack/ceilometer-0" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.620649 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b052d62-5806-4883-877c-e88c7d7deedc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2b052d62-5806-4883-877c-e88c7d7deedc\") " pod="openstack/ceilometer-0" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.642273 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqskm\" (UniqueName: \"kubernetes.io/projected/2b052d62-5806-4883-877c-e88c7d7deedc-kube-api-access-sqskm\") pod \"ceilometer-0\" (UID: 
\"2b052d62-5806-4883-877c-e88c7d7deedc\") " pod="openstack/ceilometer-0" Feb 28 04:55:00 crc kubenswrapper[5014]: I0228 04:55:00.736673 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 04:55:01 crc kubenswrapper[5014]: I0228 04:55:01.170210 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:55:01 crc kubenswrapper[5014]: W0228 04:55:01.171320 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b052d62_5806_4883_877c_e88c7d7deedc.slice/crio-343caa5cd87c4032289187c6847d893f8f7698cc8096c49f117ed49d91159e58 WatchSource:0}: Error finding container 343caa5cd87c4032289187c6847d893f8f7698cc8096c49f117ed49d91159e58: Status 404 returned error can't find the container with id 343caa5cd87c4032289187c6847d893f8f7698cc8096c49f117ed49d91159e58 Feb 28 04:55:02 crc kubenswrapper[5014]: I0228 04:55:02.047777 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b052d62-5806-4883-877c-e88c7d7deedc","Type":"ContainerStarted","Data":"e3b91abbdb2318f520d6eacd9caee8807a09e06b5a007812abe2070639d1606b"} Feb 28 04:55:02 crc kubenswrapper[5014]: I0228 04:55:02.048136 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b052d62-5806-4883-877c-e88c7d7deedc","Type":"ContainerStarted","Data":"343caa5cd87c4032289187c6847d893f8f7698cc8096c49f117ed49d91159e58"} Feb 28 04:55:02 crc kubenswrapper[5014]: I0228 04:55:02.216855 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c" path="/var/lib/kubelet/pods/1b7bb3f3-5605-4dc0-a2bf-c7cf3550fa0c/volumes" Feb 28 04:55:03 crc kubenswrapper[5014]: I0228 04:55:03.058941 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"2b052d62-5806-4883-877c-e88c7d7deedc","Type":"ContainerStarted","Data":"4c0f10aeb16b7aaa45988aafa614ecf913757afc8b3419dd2de0520a9c7fbfde"} Feb 28 04:55:04 crc kubenswrapper[5014]: I0228 04:55:04.071094 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b052d62-5806-4883-877c-e88c7d7deedc","Type":"ContainerStarted","Data":"67a474e26017811b8b2f144f6ffc1edc6f29d46325864ccbd6f50d68690545c4"} Feb 28 04:55:04 crc kubenswrapper[5014]: I0228 04:55:04.579722 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 28 04:55:04 crc kubenswrapper[5014]: I0228 04:55:04.579896 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 28 04:55:04 crc kubenswrapper[5014]: I0228 04:55:04.610013 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 28 04:55:04 crc kubenswrapper[5014]: I0228 04:55:04.645888 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 28 04:55:05 crc kubenswrapper[5014]: I0228 04:55:05.080300 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 28 04:55:05 crc kubenswrapper[5014]: I0228 04:55:05.080364 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 28 04:55:06 crc kubenswrapper[5014]: I0228 04:55:06.094823 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b052d62-5806-4883-877c-e88c7d7deedc","Type":"ContainerStarted","Data":"33575773a10e00266543b92b63d0255e2184c0b85e63a9f81012872b426c30f7"} Feb 28 04:55:06 crc kubenswrapper[5014]: I0228 04:55:06.136219 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ceilometer-0" podStartSLOduration=2.072430077 podStartE2EDuration="6.136192359s" podCreationTimestamp="2026-02-28 04:55:00 +0000 UTC" firstStartedPulling="2026-02-28 04:55:01.174415848 +0000 UTC m=+1289.844541778" lastFinishedPulling="2026-02-28 04:55:05.23817814 +0000 UTC m=+1293.908304060" observedRunningTime="2026-02-28 04:55:06.115352249 +0000 UTC m=+1294.785478179" watchObservedRunningTime="2026-02-28 04:55:06.136192359 +0000 UTC m=+1294.806318289" Feb 28 04:55:06 crc kubenswrapper[5014]: I0228 04:55:06.651505 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 28 04:55:06 crc kubenswrapper[5014]: I0228 04:55:06.651599 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 28 04:55:06 crc kubenswrapper[5014]: I0228 04:55:06.702896 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 28 04:55:06 crc kubenswrapper[5014]: I0228 04:55:06.716646 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 28 04:55:06 crc kubenswrapper[5014]: I0228 04:55:06.923044 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 28 04:55:06 crc kubenswrapper[5014]: I0228 04:55:06.924714 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 28 04:55:07 crc kubenswrapper[5014]: I0228 04:55:07.105992 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 28 04:55:07 crc kubenswrapper[5014]: I0228 04:55:07.106299 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 28 04:55:07 crc kubenswrapper[5014]: I0228 04:55:07.106791 5014 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 28 04:55:08 crc kubenswrapper[5014]: I0228 04:55:08.116484 5014 generic.go:334] "Generic (PLEG): container finished" podID="69e68ab2-dae2-4ebe-9820-c945a9897363" containerID="8a448f0ca7e013cbc317b3a7ab992f0d25fccd158f4c18c910f802df198f4a0f" exitCode=0 Feb 28 04:55:08 crc kubenswrapper[5014]: I0228 04:55:08.116652 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-txmbr" event={"ID":"69e68ab2-dae2-4ebe-9820-c945a9897363","Type":"ContainerDied","Data":"8a448f0ca7e013cbc317b3a7ab992f0d25fccd158f4c18c910f802df198f4a0f"} Feb 28 04:55:08 crc kubenswrapper[5014]: I0228 04:55:08.883564 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 28 04:55:08 crc kubenswrapper[5014]: I0228 04:55:08.884393 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 28 04:55:09 crc kubenswrapper[5014]: I0228 04:55:09.485891 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-txmbr" Feb 28 04:55:09 crc kubenswrapper[5014]: I0228 04:55:09.622472 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69e68ab2-dae2-4ebe-9820-c945a9897363-scripts\") pod \"69e68ab2-dae2-4ebe-9820-c945a9897363\" (UID: \"69e68ab2-dae2-4ebe-9820-c945a9897363\") " Feb 28 04:55:09 crc kubenswrapper[5014]: I0228 04:55:09.622624 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69e68ab2-dae2-4ebe-9820-c945a9897363-combined-ca-bundle\") pod \"69e68ab2-dae2-4ebe-9820-c945a9897363\" (UID: \"69e68ab2-dae2-4ebe-9820-c945a9897363\") " Feb 28 04:55:09 crc kubenswrapper[5014]: I0228 04:55:09.622699 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69e68ab2-dae2-4ebe-9820-c945a9897363-config-data\") pod \"69e68ab2-dae2-4ebe-9820-c945a9897363\" (UID: \"69e68ab2-dae2-4ebe-9820-c945a9897363\") " Feb 28 04:55:09 crc kubenswrapper[5014]: I0228 04:55:09.622745 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lng95\" (UniqueName: \"kubernetes.io/projected/69e68ab2-dae2-4ebe-9820-c945a9897363-kube-api-access-lng95\") pod \"69e68ab2-dae2-4ebe-9820-c945a9897363\" (UID: \"69e68ab2-dae2-4ebe-9820-c945a9897363\") " Feb 28 04:55:09 crc kubenswrapper[5014]: I0228 04:55:09.629262 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69e68ab2-dae2-4ebe-9820-c945a9897363-kube-api-access-lng95" (OuterVolumeSpecName: "kube-api-access-lng95") pod "69e68ab2-dae2-4ebe-9820-c945a9897363" (UID: "69e68ab2-dae2-4ebe-9820-c945a9897363"). InnerVolumeSpecName "kube-api-access-lng95". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:55:09 crc kubenswrapper[5014]: I0228 04:55:09.629406 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69e68ab2-dae2-4ebe-9820-c945a9897363-scripts" (OuterVolumeSpecName: "scripts") pod "69e68ab2-dae2-4ebe-9820-c945a9897363" (UID: "69e68ab2-dae2-4ebe-9820-c945a9897363"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:55:09 crc kubenswrapper[5014]: I0228 04:55:09.654059 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69e68ab2-dae2-4ebe-9820-c945a9897363-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "69e68ab2-dae2-4ebe-9820-c945a9897363" (UID: "69e68ab2-dae2-4ebe-9820-c945a9897363"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:55:09 crc kubenswrapper[5014]: I0228 04:55:09.658137 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69e68ab2-dae2-4ebe-9820-c945a9897363-config-data" (OuterVolumeSpecName: "config-data") pod "69e68ab2-dae2-4ebe-9820-c945a9897363" (UID: "69e68ab2-dae2-4ebe-9820-c945a9897363"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:55:09 crc kubenswrapper[5014]: I0228 04:55:09.724926 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/69e68ab2-dae2-4ebe-9820-c945a9897363-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:09 crc kubenswrapper[5014]: I0228 04:55:09.724965 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/69e68ab2-dae2-4ebe-9820-c945a9897363-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:09 crc kubenswrapper[5014]: I0228 04:55:09.724976 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lng95\" (UniqueName: \"kubernetes.io/projected/69e68ab2-dae2-4ebe-9820-c945a9897363-kube-api-access-lng95\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:09 crc kubenswrapper[5014]: I0228 04:55:09.724988 5014 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/69e68ab2-dae2-4ebe-9820-c945a9897363-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:10 crc kubenswrapper[5014]: I0228 04:55:10.131338 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-txmbr" event={"ID":"69e68ab2-dae2-4ebe-9820-c945a9897363","Type":"ContainerDied","Data":"765cd64a1d3547711b4256d3080d253a4a12bd81e0c49349a66d997dfee7d8be"} Feb 28 04:55:10 crc kubenswrapper[5014]: I0228 04:55:10.131367 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-txmbr" Feb 28 04:55:10 crc kubenswrapper[5014]: I0228 04:55:10.131378 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="765cd64a1d3547711b4256d3080d253a4a12bd81e0c49349a66d997dfee7d8be" Feb 28 04:55:10 crc kubenswrapper[5014]: I0228 04:55:10.240344 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 28 04:55:10 crc kubenswrapper[5014]: E0228 04:55:10.240678 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69e68ab2-dae2-4ebe-9820-c945a9897363" containerName="nova-cell0-conductor-db-sync" Feb 28 04:55:10 crc kubenswrapper[5014]: I0228 04:55:10.240693 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="69e68ab2-dae2-4ebe-9820-c945a9897363" containerName="nova-cell0-conductor-db-sync" Feb 28 04:55:10 crc kubenswrapper[5014]: I0228 04:55:10.240875 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="69e68ab2-dae2-4ebe-9820-c945a9897363" containerName="nova-cell0-conductor-db-sync" Feb 28 04:55:10 crc kubenswrapper[5014]: I0228 04:55:10.241411 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 28 04:55:10 crc kubenswrapper[5014]: I0228 04:55:10.243415 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-59rrm" Feb 28 04:55:10 crc kubenswrapper[5014]: I0228 04:55:10.243988 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 28 04:55:10 crc kubenswrapper[5014]: I0228 04:55:10.268974 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 28 04:55:10 crc kubenswrapper[5014]: I0228 04:55:10.334012 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e76b3d9a-ffbe-4d58-9264-1b4ca1528410-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e76b3d9a-ffbe-4d58-9264-1b4ca1528410\") " pod="openstack/nova-cell0-conductor-0" Feb 28 04:55:10 crc kubenswrapper[5014]: I0228 04:55:10.334122 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e76b3d9a-ffbe-4d58-9264-1b4ca1528410-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e76b3d9a-ffbe-4d58-9264-1b4ca1528410\") " pod="openstack/nova-cell0-conductor-0" Feb 28 04:55:10 crc kubenswrapper[5014]: I0228 04:55:10.334169 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpjd9\" (UniqueName: \"kubernetes.io/projected/e76b3d9a-ffbe-4d58-9264-1b4ca1528410-kube-api-access-qpjd9\") pod \"nova-cell0-conductor-0\" (UID: \"e76b3d9a-ffbe-4d58-9264-1b4ca1528410\") " pod="openstack/nova-cell0-conductor-0" Feb 28 04:55:10 crc kubenswrapper[5014]: I0228 04:55:10.435949 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e76b3d9a-ffbe-4d58-9264-1b4ca1528410-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e76b3d9a-ffbe-4d58-9264-1b4ca1528410\") " pod="openstack/nova-cell0-conductor-0" Feb 28 04:55:10 crc kubenswrapper[5014]: I0228 04:55:10.436105 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e76b3d9a-ffbe-4d58-9264-1b4ca1528410-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e76b3d9a-ffbe-4d58-9264-1b4ca1528410\") " pod="openstack/nova-cell0-conductor-0" Feb 28 04:55:10 crc kubenswrapper[5014]: I0228 04:55:10.436167 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpjd9\" (UniqueName: \"kubernetes.io/projected/e76b3d9a-ffbe-4d58-9264-1b4ca1528410-kube-api-access-qpjd9\") pod \"nova-cell0-conductor-0\" (UID: \"e76b3d9a-ffbe-4d58-9264-1b4ca1528410\") " pod="openstack/nova-cell0-conductor-0" Feb 28 04:55:10 crc kubenswrapper[5014]: I0228 04:55:10.439902 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e76b3d9a-ffbe-4d58-9264-1b4ca1528410-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"e76b3d9a-ffbe-4d58-9264-1b4ca1528410\") " pod="openstack/nova-cell0-conductor-0" Feb 28 04:55:10 crc kubenswrapper[5014]: I0228 04:55:10.440463 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e76b3d9a-ffbe-4d58-9264-1b4ca1528410-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"e76b3d9a-ffbe-4d58-9264-1b4ca1528410\") " pod="openstack/nova-cell0-conductor-0" Feb 28 04:55:10 crc kubenswrapper[5014]: I0228 04:55:10.452476 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpjd9\" (UniqueName: \"kubernetes.io/projected/e76b3d9a-ffbe-4d58-9264-1b4ca1528410-kube-api-access-qpjd9\") pod \"nova-cell0-conductor-0\" 
(UID: \"e76b3d9a-ffbe-4d58-9264-1b4ca1528410\") " pod="openstack/nova-cell0-conductor-0" Feb 28 04:55:10 crc kubenswrapper[5014]: I0228 04:55:10.605875 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 28 04:55:11 crc kubenswrapper[5014]: I0228 04:55:11.068848 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 28 04:55:11 crc kubenswrapper[5014]: I0228 04:55:11.142093 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e76b3d9a-ffbe-4d58-9264-1b4ca1528410","Type":"ContainerStarted","Data":"3261a72a1fe38b0b06bdee564d1b91aa0b6c35c5c6416643011e9fe43461fea3"} Feb 28 04:55:11 crc kubenswrapper[5014]: I0228 04:55:11.705487 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:55:11 crc kubenswrapper[5014]: I0228 04:55:11.706264 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b052d62-5806-4883-877c-e88c7d7deedc" containerName="proxy-httpd" containerID="cri-o://33575773a10e00266543b92b63d0255e2184c0b85e63a9f81012872b426c30f7" gracePeriod=30 Feb 28 04:55:11 crc kubenswrapper[5014]: I0228 04:55:11.706337 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b052d62-5806-4883-877c-e88c7d7deedc" containerName="sg-core" containerID="cri-o://67a474e26017811b8b2f144f6ffc1edc6f29d46325864ccbd6f50d68690545c4" gracePeriod=30 Feb 28 04:55:11 crc kubenswrapper[5014]: I0228 04:55:11.706537 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b052d62-5806-4883-877c-e88c7d7deedc" containerName="ceilometer-notification-agent" containerID="cri-o://4c0f10aeb16b7aaa45988aafa614ecf913757afc8b3419dd2de0520a9c7fbfde" gracePeriod=30 Feb 28 04:55:11 crc kubenswrapper[5014]: I0228 04:55:11.706548 5014 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2b052d62-5806-4883-877c-e88c7d7deedc" containerName="ceilometer-central-agent" containerID="cri-o://e3b91abbdb2318f520d6eacd9caee8807a09e06b5a007812abe2070639d1606b" gracePeriod=30 Feb 28 04:55:12 crc kubenswrapper[5014]: I0228 04:55:12.151781 5014 generic.go:334] "Generic (PLEG): container finished" podID="2b052d62-5806-4883-877c-e88c7d7deedc" containerID="33575773a10e00266543b92b63d0255e2184c0b85e63a9f81012872b426c30f7" exitCode=0 Feb 28 04:55:12 crc kubenswrapper[5014]: I0228 04:55:12.151851 5014 generic.go:334] "Generic (PLEG): container finished" podID="2b052d62-5806-4883-877c-e88c7d7deedc" containerID="67a474e26017811b8b2f144f6ffc1edc6f29d46325864ccbd6f50d68690545c4" exitCode=2 Feb 28 04:55:12 crc kubenswrapper[5014]: I0228 04:55:12.151864 5014 generic.go:334] "Generic (PLEG): container finished" podID="2b052d62-5806-4883-877c-e88c7d7deedc" containerID="e3b91abbdb2318f520d6eacd9caee8807a09e06b5a007812abe2070639d1606b" exitCode=0 Feb 28 04:55:12 crc kubenswrapper[5014]: I0228 04:55:12.152146 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b052d62-5806-4883-877c-e88c7d7deedc","Type":"ContainerDied","Data":"33575773a10e00266543b92b63d0255e2184c0b85e63a9f81012872b426c30f7"} Feb 28 04:55:12 crc kubenswrapper[5014]: I0228 04:55:12.152240 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b052d62-5806-4883-877c-e88c7d7deedc","Type":"ContainerDied","Data":"67a474e26017811b8b2f144f6ffc1edc6f29d46325864ccbd6f50d68690545c4"} Feb 28 04:55:12 crc kubenswrapper[5014]: I0228 04:55:12.152311 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b052d62-5806-4883-877c-e88c7d7deedc","Type":"ContainerDied","Data":"e3b91abbdb2318f520d6eacd9caee8807a09e06b5a007812abe2070639d1606b"} Feb 28 04:55:12 crc kubenswrapper[5014]: I0228 
04:55:12.153774 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"e76b3d9a-ffbe-4d58-9264-1b4ca1528410","Type":"ContainerStarted","Data":"99fbbf70b6a12d5511bdc7fc9a141cca136d584f5b9d3b39ca2ee48bc3e77aa6"} Feb 28 04:55:12 crc kubenswrapper[5014]: I0228 04:55:12.154038 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 28 04:55:12 crc kubenswrapper[5014]: I0228 04:55:12.186279 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.186253795 podStartE2EDuration="2.186253795s" podCreationTimestamp="2026-02-28 04:55:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:55:12.179981266 +0000 UTC m=+1300.850107226" watchObservedRunningTime="2026-02-28 04:55:12.186253795 +0000 UTC m=+1300.856379705" Feb 28 04:55:12 crc kubenswrapper[5014]: I0228 04:55:12.714780 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 04:55:12 crc kubenswrapper[5014]: I0228 04:55:12.785932 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b052d62-5806-4883-877c-e88c7d7deedc-run-httpd\") pod \"2b052d62-5806-4883-877c-e88c7d7deedc\" (UID: \"2b052d62-5806-4883-877c-e88c7d7deedc\") " Feb 28 04:55:12 crc kubenswrapper[5014]: I0228 04:55:12.786099 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b052d62-5806-4883-877c-e88c7d7deedc-combined-ca-bundle\") pod \"2b052d62-5806-4883-877c-e88c7d7deedc\" (UID: \"2b052d62-5806-4883-877c-e88c7d7deedc\") " Feb 28 04:55:12 crc kubenswrapper[5014]: I0228 04:55:12.786199 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b052d62-5806-4883-877c-e88c7d7deedc-config-data\") pod \"2b052d62-5806-4883-877c-e88c7d7deedc\" (UID: \"2b052d62-5806-4883-877c-e88c7d7deedc\") " Feb 28 04:55:12 crc kubenswrapper[5014]: I0228 04:55:12.786306 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b052d62-5806-4883-877c-e88c7d7deedc-sg-core-conf-yaml\") pod \"2b052d62-5806-4883-877c-e88c7d7deedc\" (UID: \"2b052d62-5806-4883-877c-e88c7d7deedc\") " Feb 28 04:55:12 crc kubenswrapper[5014]: I0228 04:55:12.786339 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b052d62-5806-4883-877c-e88c7d7deedc-log-httpd\") pod \"2b052d62-5806-4883-877c-e88c7d7deedc\" (UID: \"2b052d62-5806-4883-877c-e88c7d7deedc\") " Feb 28 04:55:12 crc kubenswrapper[5014]: I0228 04:55:12.786396 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/2b052d62-5806-4883-877c-e88c7d7deedc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2b052d62-5806-4883-877c-e88c7d7deedc" (UID: "2b052d62-5806-4883-877c-e88c7d7deedc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:55:12 crc kubenswrapper[5014]: I0228 04:55:12.786426 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqskm\" (UniqueName: \"kubernetes.io/projected/2b052d62-5806-4883-877c-e88c7d7deedc-kube-api-access-sqskm\") pod \"2b052d62-5806-4883-877c-e88c7d7deedc\" (UID: \"2b052d62-5806-4883-877c-e88c7d7deedc\") " Feb 28 04:55:12 crc kubenswrapper[5014]: I0228 04:55:12.786506 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b052d62-5806-4883-877c-e88c7d7deedc-scripts\") pod \"2b052d62-5806-4883-877c-e88c7d7deedc\" (UID: \"2b052d62-5806-4883-877c-e88c7d7deedc\") " Feb 28 04:55:12 crc kubenswrapper[5014]: I0228 04:55:12.786826 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2b052d62-5806-4883-877c-e88c7d7deedc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2b052d62-5806-4883-877c-e88c7d7deedc" (UID: "2b052d62-5806-4883-877c-e88c7d7deedc"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:55:12 crc kubenswrapper[5014]: I0228 04:55:12.787453 5014 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b052d62-5806-4883-877c-e88c7d7deedc-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:12 crc kubenswrapper[5014]: I0228 04:55:12.787487 5014 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2b052d62-5806-4883-877c-e88c7d7deedc-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:12 crc kubenswrapper[5014]: I0228 04:55:12.792064 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b052d62-5806-4883-877c-e88c7d7deedc-scripts" (OuterVolumeSpecName: "scripts") pod "2b052d62-5806-4883-877c-e88c7d7deedc" (UID: "2b052d62-5806-4883-877c-e88c7d7deedc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:55:12 crc kubenswrapper[5014]: I0228 04:55:12.795902 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b052d62-5806-4883-877c-e88c7d7deedc-kube-api-access-sqskm" (OuterVolumeSpecName: "kube-api-access-sqskm") pod "2b052d62-5806-4883-877c-e88c7d7deedc" (UID: "2b052d62-5806-4883-877c-e88c7d7deedc"). InnerVolumeSpecName "kube-api-access-sqskm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:55:12 crc kubenswrapper[5014]: I0228 04:55:12.828770 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b052d62-5806-4883-877c-e88c7d7deedc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2b052d62-5806-4883-877c-e88c7d7deedc" (UID: "2b052d62-5806-4883-877c-e88c7d7deedc"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:55:12 crc kubenswrapper[5014]: I0228 04:55:12.889489 5014 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2b052d62-5806-4883-877c-e88c7d7deedc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:12 crc kubenswrapper[5014]: I0228 04:55:12.889530 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqskm\" (UniqueName: \"kubernetes.io/projected/2b052d62-5806-4883-877c-e88c7d7deedc-kube-api-access-sqskm\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:12 crc kubenswrapper[5014]: I0228 04:55:12.889545 5014 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2b052d62-5806-4883-877c-e88c7d7deedc-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:12 crc kubenswrapper[5014]: I0228 04:55:12.894785 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b052d62-5806-4883-877c-e88c7d7deedc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b052d62-5806-4883-877c-e88c7d7deedc" (UID: "2b052d62-5806-4883-877c-e88c7d7deedc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:55:12 crc kubenswrapper[5014]: I0228 04:55:12.902375 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b052d62-5806-4883-877c-e88c7d7deedc-config-data" (OuterVolumeSpecName: "config-data") pod "2b052d62-5806-4883-877c-e88c7d7deedc" (UID: "2b052d62-5806-4883-877c-e88c7d7deedc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:55:12 crc kubenswrapper[5014]: I0228 04:55:12.991119 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b052d62-5806-4883-877c-e88c7d7deedc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:12 crc kubenswrapper[5014]: I0228 04:55:12.991158 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b052d62-5806-4883-877c-e88c7d7deedc-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.163641 5014 generic.go:334] "Generic (PLEG): container finished" podID="2b052d62-5806-4883-877c-e88c7d7deedc" containerID="4c0f10aeb16b7aaa45988aafa614ecf913757afc8b3419dd2de0520a9c7fbfde" exitCode=0 Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.163789 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.164437 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b052d62-5806-4883-877c-e88c7d7deedc","Type":"ContainerDied","Data":"4c0f10aeb16b7aaa45988aafa614ecf913757afc8b3419dd2de0520a9c7fbfde"} Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.164464 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2b052d62-5806-4883-877c-e88c7d7deedc","Type":"ContainerDied","Data":"343caa5cd87c4032289187c6847d893f8f7698cc8096c49f117ed49d91159e58"} Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.164480 5014 scope.go:117] "RemoveContainer" containerID="33575773a10e00266543b92b63d0255e2184c0b85e63a9f81012872b426c30f7" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.219936 5014 scope.go:117] "RemoveContainer" containerID="67a474e26017811b8b2f144f6ffc1edc6f29d46325864ccbd6f50d68690545c4" Feb 28 04:55:13 crc 
kubenswrapper[5014]: I0228 04:55:13.230517 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.248007 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.252707 5014 scope.go:117] "RemoveContainer" containerID="4c0f10aeb16b7aaa45988aafa614ecf913757afc8b3419dd2de0520a9c7fbfde" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.268243 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:55:13 crc kubenswrapper[5014]: E0228 04:55:13.268726 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b052d62-5806-4883-877c-e88c7d7deedc" containerName="ceilometer-notification-agent" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.268745 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b052d62-5806-4883-877c-e88c7d7deedc" containerName="ceilometer-notification-agent" Feb 28 04:55:13 crc kubenswrapper[5014]: E0228 04:55:13.268766 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b052d62-5806-4883-877c-e88c7d7deedc" containerName="proxy-httpd" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.268774 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b052d62-5806-4883-877c-e88c7d7deedc" containerName="proxy-httpd" Feb 28 04:55:13 crc kubenswrapper[5014]: E0228 04:55:13.268829 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b052d62-5806-4883-877c-e88c7d7deedc" containerName="ceilometer-central-agent" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.268840 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b052d62-5806-4883-877c-e88c7d7deedc" containerName="ceilometer-central-agent" Feb 28 04:55:13 crc kubenswrapper[5014]: E0228 04:55:13.268857 5014 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2b052d62-5806-4883-877c-e88c7d7deedc" containerName="sg-core" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.268865 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b052d62-5806-4883-877c-e88c7d7deedc" containerName="sg-core" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.269064 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b052d62-5806-4883-877c-e88c7d7deedc" containerName="sg-core" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.269081 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b052d62-5806-4883-877c-e88c7d7deedc" containerName="ceilometer-notification-agent" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.269099 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b052d62-5806-4883-877c-e88c7d7deedc" containerName="ceilometer-central-agent" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.269128 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b052d62-5806-4883-877c-e88c7d7deedc" containerName="proxy-httpd" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.271647 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.277202 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.277472 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.286479 5014 scope.go:117] "RemoveContainer" containerID="e3b91abbdb2318f520d6eacd9caee8807a09e06b5a007812abe2070639d1606b" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.294036 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.299279 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66f74e9e-e211-467c-a1a4-93a01ff93dd1-run-httpd\") pod \"ceilometer-0\" (UID: \"66f74e9e-e211-467c-a1a4-93a01ff93dd1\") " pod="openstack/ceilometer-0" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.299331 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66f74e9e-e211-467c-a1a4-93a01ff93dd1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"66f74e9e-e211-467c-a1a4-93a01ff93dd1\") " pod="openstack/ceilometer-0" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.299381 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66f74e9e-e211-467c-a1a4-93a01ff93dd1-log-httpd\") pod \"ceilometer-0\" (UID: \"66f74e9e-e211-467c-a1a4-93a01ff93dd1\") " pod="openstack/ceilometer-0" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.299522 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/66f74e9e-e211-467c-a1a4-93a01ff93dd1-config-data\") pod \"ceilometer-0\" (UID: \"66f74e9e-e211-467c-a1a4-93a01ff93dd1\") " pod="openstack/ceilometer-0" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.299547 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66f74e9e-e211-467c-a1a4-93a01ff93dd1-scripts\") pod \"ceilometer-0\" (UID: \"66f74e9e-e211-467c-a1a4-93a01ff93dd1\") " pod="openstack/ceilometer-0" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.299593 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d45n9\" (UniqueName: \"kubernetes.io/projected/66f74e9e-e211-467c-a1a4-93a01ff93dd1-kube-api-access-d45n9\") pod \"ceilometer-0\" (UID: \"66f74e9e-e211-467c-a1a4-93a01ff93dd1\") " pod="openstack/ceilometer-0" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.299618 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66f74e9e-e211-467c-a1a4-93a01ff93dd1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"66f74e9e-e211-467c-a1a4-93a01ff93dd1\") " pod="openstack/ceilometer-0" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.323056 5014 scope.go:117] "RemoveContainer" containerID="33575773a10e00266543b92b63d0255e2184c0b85e63a9f81012872b426c30f7" Feb 28 04:55:13 crc kubenswrapper[5014]: E0228 04:55:13.323561 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33575773a10e00266543b92b63d0255e2184c0b85e63a9f81012872b426c30f7\": container with ID starting with 33575773a10e00266543b92b63d0255e2184c0b85e63a9f81012872b426c30f7 not found: ID does not exist" containerID="33575773a10e00266543b92b63d0255e2184c0b85e63a9f81012872b426c30f7" Feb 28 04:55:13 crc 
kubenswrapper[5014]: I0228 04:55:13.323612 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33575773a10e00266543b92b63d0255e2184c0b85e63a9f81012872b426c30f7"} err="failed to get container status \"33575773a10e00266543b92b63d0255e2184c0b85e63a9f81012872b426c30f7\": rpc error: code = NotFound desc = could not find container \"33575773a10e00266543b92b63d0255e2184c0b85e63a9f81012872b426c30f7\": container with ID starting with 33575773a10e00266543b92b63d0255e2184c0b85e63a9f81012872b426c30f7 not found: ID does not exist" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.323632 5014 scope.go:117] "RemoveContainer" containerID="67a474e26017811b8b2f144f6ffc1edc6f29d46325864ccbd6f50d68690545c4" Feb 28 04:55:13 crc kubenswrapper[5014]: E0228 04:55:13.323984 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67a474e26017811b8b2f144f6ffc1edc6f29d46325864ccbd6f50d68690545c4\": container with ID starting with 67a474e26017811b8b2f144f6ffc1edc6f29d46325864ccbd6f50d68690545c4 not found: ID does not exist" containerID="67a474e26017811b8b2f144f6ffc1edc6f29d46325864ccbd6f50d68690545c4" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.324007 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67a474e26017811b8b2f144f6ffc1edc6f29d46325864ccbd6f50d68690545c4"} err="failed to get container status \"67a474e26017811b8b2f144f6ffc1edc6f29d46325864ccbd6f50d68690545c4\": rpc error: code = NotFound desc = could not find container \"67a474e26017811b8b2f144f6ffc1edc6f29d46325864ccbd6f50d68690545c4\": container with ID starting with 67a474e26017811b8b2f144f6ffc1edc6f29d46325864ccbd6f50d68690545c4 not found: ID does not exist" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.324048 5014 scope.go:117] "RemoveContainer" containerID="4c0f10aeb16b7aaa45988aafa614ecf913757afc8b3419dd2de0520a9c7fbfde" Feb 28 
04:55:13 crc kubenswrapper[5014]: E0228 04:55:13.324417 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c0f10aeb16b7aaa45988aafa614ecf913757afc8b3419dd2de0520a9c7fbfde\": container with ID starting with 4c0f10aeb16b7aaa45988aafa614ecf913757afc8b3419dd2de0520a9c7fbfde not found: ID does not exist" containerID="4c0f10aeb16b7aaa45988aafa614ecf913757afc8b3419dd2de0520a9c7fbfde" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.324478 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c0f10aeb16b7aaa45988aafa614ecf913757afc8b3419dd2de0520a9c7fbfde"} err="failed to get container status \"4c0f10aeb16b7aaa45988aafa614ecf913757afc8b3419dd2de0520a9c7fbfde\": rpc error: code = NotFound desc = could not find container \"4c0f10aeb16b7aaa45988aafa614ecf913757afc8b3419dd2de0520a9c7fbfde\": container with ID starting with 4c0f10aeb16b7aaa45988aafa614ecf913757afc8b3419dd2de0520a9c7fbfde not found: ID does not exist" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.324492 5014 scope.go:117] "RemoveContainer" containerID="e3b91abbdb2318f520d6eacd9caee8807a09e06b5a007812abe2070639d1606b" Feb 28 04:55:13 crc kubenswrapper[5014]: E0228 04:55:13.324788 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3b91abbdb2318f520d6eacd9caee8807a09e06b5a007812abe2070639d1606b\": container with ID starting with e3b91abbdb2318f520d6eacd9caee8807a09e06b5a007812abe2070639d1606b not found: ID does not exist" containerID="e3b91abbdb2318f520d6eacd9caee8807a09e06b5a007812abe2070639d1606b" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.324878 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3b91abbdb2318f520d6eacd9caee8807a09e06b5a007812abe2070639d1606b"} err="failed to get container status 
\"e3b91abbdb2318f520d6eacd9caee8807a09e06b5a007812abe2070639d1606b\": rpc error: code = NotFound desc = could not find container \"e3b91abbdb2318f520d6eacd9caee8807a09e06b5a007812abe2070639d1606b\": container with ID starting with e3b91abbdb2318f520d6eacd9caee8807a09e06b5a007812abe2070639d1606b not found: ID does not exist" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.400666 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66f74e9e-e211-467c-a1a4-93a01ff93dd1-log-httpd\") pod \"ceilometer-0\" (UID: \"66f74e9e-e211-467c-a1a4-93a01ff93dd1\") " pod="openstack/ceilometer-0" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.400844 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66f74e9e-e211-467c-a1a4-93a01ff93dd1-config-data\") pod \"ceilometer-0\" (UID: \"66f74e9e-e211-467c-a1a4-93a01ff93dd1\") " pod="openstack/ceilometer-0" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.400872 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66f74e9e-e211-467c-a1a4-93a01ff93dd1-scripts\") pod \"ceilometer-0\" (UID: \"66f74e9e-e211-467c-a1a4-93a01ff93dd1\") " pod="openstack/ceilometer-0" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.400956 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d45n9\" (UniqueName: \"kubernetes.io/projected/66f74e9e-e211-467c-a1a4-93a01ff93dd1-kube-api-access-d45n9\") pod \"ceilometer-0\" (UID: \"66f74e9e-e211-467c-a1a4-93a01ff93dd1\") " pod="openstack/ceilometer-0" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.401004 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66f74e9e-e211-467c-a1a4-93a01ff93dd1-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"66f74e9e-e211-467c-a1a4-93a01ff93dd1\") " pod="openstack/ceilometer-0" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.401091 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66f74e9e-e211-467c-a1a4-93a01ff93dd1-run-httpd\") pod \"ceilometer-0\" (UID: \"66f74e9e-e211-467c-a1a4-93a01ff93dd1\") " pod="openstack/ceilometer-0" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.401112 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66f74e9e-e211-467c-a1a4-93a01ff93dd1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"66f74e9e-e211-467c-a1a4-93a01ff93dd1\") " pod="openstack/ceilometer-0" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.402610 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66f74e9e-e211-467c-a1a4-93a01ff93dd1-run-httpd\") pod \"ceilometer-0\" (UID: \"66f74e9e-e211-467c-a1a4-93a01ff93dd1\") " pod="openstack/ceilometer-0" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.402834 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66f74e9e-e211-467c-a1a4-93a01ff93dd1-log-httpd\") pod \"ceilometer-0\" (UID: \"66f74e9e-e211-467c-a1a4-93a01ff93dd1\") " pod="openstack/ceilometer-0" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.406778 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66f74e9e-e211-467c-a1a4-93a01ff93dd1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"66f74e9e-e211-467c-a1a4-93a01ff93dd1\") " pod="openstack/ceilometer-0" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.407294 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" 
(UniqueName: \"kubernetes.io/secret/66f74e9e-e211-467c-a1a4-93a01ff93dd1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"66f74e9e-e211-467c-a1a4-93a01ff93dd1\") " pod="openstack/ceilometer-0" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.407847 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66f74e9e-e211-467c-a1a4-93a01ff93dd1-config-data\") pod \"ceilometer-0\" (UID: \"66f74e9e-e211-467c-a1a4-93a01ff93dd1\") " pod="openstack/ceilometer-0" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.408523 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66f74e9e-e211-467c-a1a4-93a01ff93dd1-scripts\") pod \"ceilometer-0\" (UID: \"66f74e9e-e211-467c-a1a4-93a01ff93dd1\") " pod="openstack/ceilometer-0" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.420467 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d45n9\" (UniqueName: \"kubernetes.io/projected/66f74e9e-e211-467c-a1a4-93a01ff93dd1-kube-api-access-d45n9\") pod \"ceilometer-0\" (UID: \"66f74e9e-e211-467c-a1a4-93a01ff93dd1\") " pod="openstack/ceilometer-0" Feb 28 04:55:13 crc kubenswrapper[5014]: I0228 04:55:13.603320 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 04:55:14 crc kubenswrapper[5014]: I0228 04:55:14.109865 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:55:14 crc kubenswrapper[5014]: W0228 04:55:14.115078 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod66f74e9e_e211_467c_a1a4_93a01ff93dd1.slice/crio-8e77a3f0119574315e0938fb575bebe7653432e52b5efef6b4e9ee96366cb950 WatchSource:0}: Error finding container 8e77a3f0119574315e0938fb575bebe7653432e52b5efef6b4e9ee96366cb950: Status 404 returned error can't find the container with id 8e77a3f0119574315e0938fb575bebe7653432e52b5efef6b4e9ee96366cb950 Feb 28 04:55:14 crc kubenswrapper[5014]: I0228 04:55:14.199939 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b052d62-5806-4883-877c-e88c7d7deedc" path="/var/lib/kubelet/pods/2b052d62-5806-4883-877c-e88c7d7deedc/volumes" Feb 28 04:55:14 crc kubenswrapper[5014]: I0228 04:55:14.200885 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66f74e9e-e211-467c-a1a4-93a01ff93dd1","Type":"ContainerStarted","Data":"8e77a3f0119574315e0938fb575bebe7653432e52b5efef6b4e9ee96366cb950"} Feb 28 04:55:15 crc kubenswrapper[5014]: I0228 04:55:15.196554 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66f74e9e-e211-467c-a1a4-93a01ff93dd1","Type":"ContainerStarted","Data":"df2b1fed0b546adf18bd5346ef003f7add4379fdd3ce9c4a4e1102d6504e8cbb"} Feb 28 04:55:16 crc kubenswrapper[5014]: I0228 04:55:16.214474 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66f74e9e-e211-467c-a1a4-93a01ff93dd1","Type":"ContainerStarted","Data":"b064ffa2970fac4c6c85bac6219a8a5822bfbb6c85df40e07e8d32d5afe5244a"} Feb 28 04:55:17 crc kubenswrapper[5014]: I0228 04:55:17.235091 5014 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/ceilometer-0" event={"ID":"66f74e9e-e211-467c-a1a4-93a01ff93dd1","Type":"ContainerStarted","Data":"d5aababe7bdfb415d477854d0ce21bbc6bc6951eef00c94f73b554db95872510"} Feb 28 04:55:19 crc kubenswrapper[5014]: I0228 04:55:19.260856 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66f74e9e-e211-467c-a1a4-93a01ff93dd1","Type":"ContainerStarted","Data":"838d192b24556a3c9deb83806ca8561b630f516a6b4db2006248bd85156badaa"} Feb 28 04:55:19 crc kubenswrapper[5014]: I0228 04:55:19.263954 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 28 04:55:19 crc kubenswrapper[5014]: I0228 04:55:19.298034 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.089037273 podStartE2EDuration="6.298011692s" podCreationTimestamp="2026-02-28 04:55:13 +0000 UTC" firstStartedPulling="2026-02-28 04:55:14.119430963 +0000 UTC m=+1302.789556903" lastFinishedPulling="2026-02-28 04:55:18.328405372 +0000 UTC m=+1306.998531322" observedRunningTime="2026-02-28 04:55:19.294637852 +0000 UTC m=+1307.964763812" watchObservedRunningTime="2026-02-28 04:55:19.298011692 +0000 UTC m=+1307.968137602" Feb 28 04:55:20 crc kubenswrapper[5014]: I0228 04:55:20.650080 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.245887 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-c82cz"] Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.263133 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-c82cz" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.266478 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.266887 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.273603 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-c82cz"] Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.364717 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.366296 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.374771 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.382584 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.457414 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c2c7b5d-d778-4d96-a6fb-171203f594d8-config-data\") pod \"nova-cell0-cell-mapping-c82cz\" (UID: \"3c2c7b5d-d778-4d96-a6fb-171203f594d8\") " pod="openstack/nova-cell0-cell-mapping-c82cz" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.457458 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77tm2\" (UniqueName: \"kubernetes.io/projected/3c2c7b5d-d778-4d96-a6fb-171203f594d8-kube-api-access-77tm2\") pod \"nova-cell0-cell-mapping-c82cz\" (UID: \"3c2c7b5d-d778-4d96-a6fb-171203f594d8\") " 
pod="openstack/nova-cell0-cell-mapping-c82cz" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.457497 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c2c7b5d-d778-4d96-a6fb-171203f594d8-scripts\") pod \"nova-cell0-cell-mapping-c82cz\" (UID: \"3c2c7b5d-d778-4d96-a6fb-171203f594d8\") " pod="openstack/nova-cell0-cell-mapping-c82cz" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.457593 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c2c7b5d-d778-4d96-a6fb-171203f594d8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-c82cz\" (UID: \"3c2c7b5d-d778-4d96-a6fb-171203f594d8\") " pod="openstack/nova-cell0-cell-mapping-c82cz" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.477011 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-b7gl9"] Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.478444 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-b7gl9" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.512716 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-b7gl9"] Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.532775 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.534191 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.537202 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.552673 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.562317 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c2c7b5d-d778-4d96-a6fb-171203f594d8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-c82cz\" (UID: \"3c2c7b5d-d778-4d96-a6fb-171203f594d8\") " pod="openstack/nova-cell0-cell-mapping-c82cz" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.562381 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4164fc18-910b-4bef-9be7-a6d5f9d1004e-logs\") pod \"nova-metadata-0\" (UID: \"4164fc18-910b-4bef-9be7-a6d5f9d1004e\") " pod="openstack/nova-metadata-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.562448 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c2c7b5d-d778-4d96-a6fb-171203f594d8-config-data\") pod \"nova-cell0-cell-mapping-c82cz\" (UID: \"3c2c7b5d-d778-4d96-a6fb-171203f594d8\") " pod="openstack/nova-cell0-cell-mapping-c82cz" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.562469 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77tm2\" (UniqueName: \"kubernetes.io/projected/3c2c7b5d-d778-4d96-a6fb-171203f594d8-kube-api-access-77tm2\") pod \"nova-cell0-cell-mapping-c82cz\" (UID: \"3c2c7b5d-d778-4d96-a6fb-171203f594d8\") " pod="openstack/nova-cell0-cell-mapping-c82cz" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 
04:55:21.562486 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c2c7b5d-d778-4d96-a6fb-171203f594d8-scripts\") pod \"nova-cell0-cell-mapping-c82cz\" (UID: \"3c2c7b5d-d778-4d96-a6fb-171203f594d8\") " pod="openstack/nova-cell0-cell-mapping-c82cz" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.562503 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2dvf\" (UniqueName: \"kubernetes.io/projected/4164fc18-910b-4bef-9be7-a6d5f9d1004e-kube-api-access-r2dvf\") pod \"nova-metadata-0\" (UID: \"4164fc18-910b-4bef-9be7-a6d5f9d1004e\") " pod="openstack/nova-metadata-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.562550 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4164fc18-910b-4bef-9be7-a6d5f9d1004e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4164fc18-910b-4bef-9be7-a6d5f9d1004e\") " pod="openstack/nova-metadata-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.562569 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4164fc18-910b-4bef-9be7-a6d5f9d1004e-config-data\") pod \"nova-metadata-0\" (UID: \"4164fc18-910b-4bef-9be7-a6d5f9d1004e\") " pod="openstack/nova-metadata-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.570176 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c2c7b5d-d778-4d96-a6fb-171203f594d8-config-data\") pod \"nova-cell0-cell-mapping-c82cz\" (UID: \"3c2c7b5d-d778-4d96-a6fb-171203f594d8\") " pod="openstack/nova-cell0-cell-mapping-c82cz" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.571390 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/3c2c7b5d-d778-4d96-a6fb-171203f594d8-scripts\") pod \"nova-cell0-cell-mapping-c82cz\" (UID: \"3c2c7b5d-d778-4d96-a6fb-171203f594d8\") " pod="openstack/nova-cell0-cell-mapping-c82cz" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.597395 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c2c7b5d-d778-4d96-a6fb-171203f594d8-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-c82cz\" (UID: \"3c2c7b5d-d778-4d96-a6fb-171203f594d8\") " pod="openstack/nova-cell0-cell-mapping-c82cz" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.619509 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77tm2\" (UniqueName: \"kubernetes.io/projected/3c2c7b5d-d778-4d96-a6fb-171203f594d8-kube-api-access-77tm2\") pod \"nova-cell0-cell-mapping-c82cz\" (UID: \"3c2c7b5d-d778-4d96-a6fb-171203f594d8\") " pod="openstack/nova-cell0-cell-mapping-c82cz" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.643326 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.645096 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.661744 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.665009 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4164fc18-910b-4bef-9be7-a6d5f9d1004e-logs\") pod \"nova-metadata-0\" (UID: \"4164fc18-910b-4bef-9be7-a6d5f9d1004e\") " pod="openstack/nova-metadata-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.665052 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/facf8396-8625-4f68-9167-be011dd01a6b-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-b7gl9\" (UID: \"facf8396-8625-4f68-9167-be011dd01a6b\") " pod="openstack/dnsmasq-dns-757b4f8459-b7gl9" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.665096 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/facf8396-8625-4f68-9167-be011dd01a6b-config\") pod \"dnsmasq-dns-757b4f8459-b7gl9\" (UID: \"facf8396-8625-4f68-9167-be011dd01a6b\") " pod="openstack/dnsmasq-dns-757b4f8459-b7gl9" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.665140 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/facf8396-8625-4f68-9167-be011dd01a6b-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-b7gl9\" (UID: \"facf8396-8625-4f68-9167-be011dd01a6b\") " pod="openstack/dnsmasq-dns-757b4f8459-b7gl9" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.665170 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2dvf\" (UniqueName: 
\"kubernetes.io/projected/4164fc18-910b-4bef-9be7-a6d5f9d1004e-kube-api-access-r2dvf\") pod \"nova-metadata-0\" (UID: \"4164fc18-910b-4bef-9be7-a6d5f9d1004e\") " pod="openstack/nova-metadata-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.665204 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c2600df-f028-4e93-82c5-c25cb1112ffb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4c2600df-f028-4e93-82c5-c25cb1112ffb\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.665222 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c2600df-f028-4e93-82c5-c25cb1112ffb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4c2600df-f028-4e93-82c5-c25cb1112ffb\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.665238 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/facf8396-8625-4f68-9167-be011dd01a6b-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-b7gl9\" (UID: \"facf8396-8625-4f68-9167-be011dd01a6b\") " pod="openstack/dnsmasq-dns-757b4f8459-b7gl9" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.665260 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/facf8396-8625-4f68-9167-be011dd01a6b-dns-svc\") pod \"dnsmasq-dns-757b4f8459-b7gl9\" (UID: \"facf8396-8625-4f68-9167-be011dd01a6b\") " pod="openstack/dnsmasq-dns-757b4f8459-b7gl9" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.665281 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4164fc18-910b-4bef-9be7-a6d5f9d1004e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4164fc18-910b-4bef-9be7-a6d5f9d1004e\") " pod="openstack/nova-metadata-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.665302 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4164fc18-910b-4bef-9be7-a6d5f9d1004e-config-data\") pod \"nova-metadata-0\" (UID: \"4164fc18-910b-4bef-9be7-a6d5f9d1004e\") " pod="openstack/nova-metadata-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.665318 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6gl9\" (UniqueName: \"kubernetes.io/projected/facf8396-8625-4f68-9167-be011dd01a6b-kube-api-access-t6gl9\") pod \"dnsmasq-dns-757b4f8459-b7gl9\" (UID: \"facf8396-8625-4f68-9167-be011dd01a6b\") " pod="openstack/dnsmasq-dns-757b4f8459-b7gl9" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.665389 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhqx8\" (UniqueName: \"kubernetes.io/projected/4c2600df-f028-4e93-82c5-c25cb1112ffb-kube-api-access-vhqx8\") pod \"nova-cell1-novncproxy-0\" (UID: \"4c2600df-f028-4e93-82c5-c25cb1112ffb\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.665773 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4164fc18-910b-4bef-9be7-a6d5f9d1004e-logs\") pod \"nova-metadata-0\" (UID: \"4164fc18-910b-4bef-9be7-a6d5f9d1004e\") " pod="openstack/nova-metadata-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.666543 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.668140 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.679052 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4164fc18-910b-4bef-9be7-a6d5f9d1004e-config-data\") pod \"nova-metadata-0\" (UID: \"4164fc18-910b-4bef-9be7-a6d5f9d1004e\") " pod="openstack/nova-metadata-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.679459 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.679522 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.680057 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4164fc18-910b-4bef-9be7-a6d5f9d1004e-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4164fc18-910b-4bef-9be7-a6d5f9d1004e\") " pod="openstack/nova-metadata-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.685450 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.691019 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2dvf\" (UniqueName: \"kubernetes.io/projected/4164fc18-910b-4bef-9be7-a6d5f9d1004e-kube-api-access-r2dvf\") pod \"nova-metadata-0\" (UID: \"4164fc18-910b-4bef-9be7-a6d5f9d1004e\") " pod="openstack/nova-metadata-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.767030 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/facf8396-8625-4f68-9167-be011dd01a6b-ovsdbserver-sb\") pod \"dnsmasq-dns-757b4f8459-b7gl9\" (UID: \"facf8396-8625-4f68-9167-be011dd01a6b\") " pod="openstack/dnsmasq-dns-757b4f8459-b7gl9" Feb 28 04:55:21 crc 
kubenswrapper[5014]: I0228 04:55:21.767128 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/571ccc83-9293-4ac8-bc08-6b659925845e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"571ccc83-9293-4ac8-bc08-6b659925845e\") " pod="openstack/nova-api-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.767211 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/571ccc83-9293-4ac8-bc08-6b659925845e-config-data\") pod \"nova-api-0\" (UID: \"571ccc83-9293-4ac8-bc08-6b659925845e\") " pod="openstack/nova-api-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.767275 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhtxp\" (UniqueName: \"kubernetes.io/projected/c50de725-9c5d-4801-8163-c4382a024617-kube-api-access-bhtxp\") pod \"nova-scheduler-0\" (UID: \"c50de725-9c5d-4801-8163-c4382a024617\") " pod="openstack/nova-scheduler-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.767317 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q54t2\" (UniqueName: \"kubernetes.io/projected/571ccc83-9293-4ac8-bc08-6b659925845e-kube-api-access-q54t2\") pod \"nova-api-0\" (UID: \"571ccc83-9293-4ac8-bc08-6b659925845e\") " pod="openstack/nova-api-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.767343 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c2600df-f028-4e93-82c5-c25cb1112ffb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4c2600df-f028-4e93-82c5-c25cb1112ffb\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.767368 5014 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c2600df-f028-4e93-82c5-c25cb1112ffb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4c2600df-f028-4e93-82c5-c25cb1112ffb\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.767394 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/facf8396-8625-4f68-9167-be011dd01a6b-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-b7gl9\" (UID: \"facf8396-8625-4f68-9167-be011dd01a6b\") " pod="openstack/dnsmasq-dns-757b4f8459-b7gl9" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.767444 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/facf8396-8625-4f68-9167-be011dd01a6b-dns-svc\") pod \"dnsmasq-dns-757b4f8459-b7gl9\" (UID: \"facf8396-8625-4f68-9167-be011dd01a6b\") " pod="openstack/dnsmasq-dns-757b4f8459-b7gl9" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.767475 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6gl9\" (UniqueName: \"kubernetes.io/projected/facf8396-8625-4f68-9167-be011dd01a6b-kube-api-access-t6gl9\") pod \"dnsmasq-dns-757b4f8459-b7gl9\" (UID: \"facf8396-8625-4f68-9167-be011dd01a6b\") " pod="openstack/dnsmasq-dns-757b4f8459-b7gl9" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.767529 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c50de725-9c5d-4801-8163-c4382a024617-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c50de725-9c5d-4801-8163-c4382a024617\") " pod="openstack/nova-scheduler-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.767596 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/571ccc83-9293-4ac8-bc08-6b659925845e-logs\") pod \"nova-api-0\" (UID: \"571ccc83-9293-4ac8-bc08-6b659925845e\") " pod="openstack/nova-api-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.767633 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhqx8\" (UniqueName: \"kubernetes.io/projected/4c2600df-f028-4e93-82c5-c25cb1112ffb-kube-api-access-vhqx8\") pod \"nova-cell1-novncproxy-0\" (UID: \"4c2600df-f028-4e93-82c5-c25cb1112ffb\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.767663 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/facf8396-8625-4f68-9167-be011dd01a6b-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-b7gl9\" (UID: \"facf8396-8625-4f68-9167-be011dd01a6b\") " pod="openstack/dnsmasq-dns-757b4f8459-b7gl9" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.767712 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/facf8396-8625-4f68-9167-be011dd01a6b-config\") pod \"dnsmasq-dns-757b4f8459-b7gl9\" (UID: \"facf8396-8625-4f68-9167-be011dd01a6b\") " pod="openstack/dnsmasq-dns-757b4f8459-b7gl9" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.767744 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c50de725-9c5d-4801-8163-c4382a024617-config-data\") pod \"nova-scheduler-0\" (UID: \"c50de725-9c5d-4801-8163-c4382a024617\") " pod="openstack/nova-scheduler-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.768255 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/facf8396-8625-4f68-9167-be011dd01a6b-ovsdbserver-sb\") pod 
\"dnsmasq-dns-757b4f8459-b7gl9\" (UID: \"facf8396-8625-4f68-9167-be011dd01a6b\") " pod="openstack/dnsmasq-dns-757b4f8459-b7gl9" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.768929 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/facf8396-8625-4f68-9167-be011dd01a6b-dns-swift-storage-0\") pod \"dnsmasq-dns-757b4f8459-b7gl9\" (UID: \"facf8396-8625-4f68-9167-be011dd01a6b\") " pod="openstack/dnsmasq-dns-757b4f8459-b7gl9" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.768972 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/facf8396-8625-4f68-9167-be011dd01a6b-dns-svc\") pod \"dnsmasq-dns-757b4f8459-b7gl9\" (UID: \"facf8396-8625-4f68-9167-be011dd01a6b\") " pod="openstack/dnsmasq-dns-757b4f8459-b7gl9" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.769520 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/facf8396-8625-4f68-9167-be011dd01a6b-ovsdbserver-nb\") pod \"dnsmasq-dns-757b4f8459-b7gl9\" (UID: \"facf8396-8625-4f68-9167-be011dd01a6b\") " pod="openstack/dnsmasq-dns-757b4f8459-b7gl9" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.769545 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/facf8396-8625-4f68-9167-be011dd01a6b-config\") pod \"dnsmasq-dns-757b4f8459-b7gl9\" (UID: \"facf8396-8625-4f68-9167-be011dd01a6b\") " pod="openstack/dnsmasq-dns-757b4f8459-b7gl9" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.770649 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c2600df-f028-4e93-82c5-c25cb1112ffb-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4c2600df-f028-4e93-82c5-c25cb1112ffb\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.773321 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c2600df-f028-4e93-82c5-c25cb1112ffb-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4c2600df-f028-4e93-82c5-c25cb1112ffb\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.783618 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6gl9\" (UniqueName: \"kubernetes.io/projected/facf8396-8625-4f68-9167-be011dd01a6b-kube-api-access-t6gl9\") pod \"dnsmasq-dns-757b4f8459-b7gl9\" (UID: \"facf8396-8625-4f68-9167-be011dd01a6b\") " pod="openstack/dnsmasq-dns-757b4f8459-b7gl9" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.783962 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhqx8\" (UniqueName: \"kubernetes.io/projected/4c2600df-f028-4e93-82c5-c25cb1112ffb-kube-api-access-vhqx8\") pod \"nova-cell1-novncproxy-0\" (UID: \"4c2600df-f028-4e93-82c5-c25cb1112ffb\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.806163 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-b7gl9" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.853856 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.869493 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c50de725-9c5d-4801-8163-c4382a024617-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c50de725-9c5d-4801-8163-c4382a024617\") " pod="openstack/nova-scheduler-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.869581 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/571ccc83-9293-4ac8-bc08-6b659925845e-logs\") pod \"nova-api-0\" (UID: \"571ccc83-9293-4ac8-bc08-6b659925845e\") " pod="openstack/nova-api-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.869647 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c50de725-9c5d-4801-8163-c4382a024617-config-data\") pod \"nova-scheduler-0\" (UID: \"c50de725-9c5d-4801-8163-c4382a024617\") " pod="openstack/nova-scheduler-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.869698 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/571ccc83-9293-4ac8-bc08-6b659925845e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"571ccc83-9293-4ac8-bc08-6b659925845e\") " pod="openstack/nova-api-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.869722 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/571ccc83-9293-4ac8-bc08-6b659925845e-config-data\") pod \"nova-api-0\" (UID: \"571ccc83-9293-4ac8-bc08-6b659925845e\") " pod="openstack/nova-api-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.869759 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhtxp\" 
(UniqueName: \"kubernetes.io/projected/c50de725-9c5d-4801-8163-c4382a024617-kube-api-access-bhtxp\") pod \"nova-scheduler-0\" (UID: \"c50de725-9c5d-4801-8163-c4382a024617\") " pod="openstack/nova-scheduler-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.869788 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q54t2\" (UniqueName: \"kubernetes.io/projected/571ccc83-9293-4ac8-bc08-6b659925845e-kube-api-access-q54t2\") pod \"nova-api-0\" (UID: \"571ccc83-9293-4ac8-bc08-6b659925845e\") " pod="openstack/nova-api-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.870978 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/571ccc83-9293-4ac8-bc08-6b659925845e-logs\") pod \"nova-api-0\" (UID: \"571ccc83-9293-4ac8-bc08-6b659925845e\") " pod="openstack/nova-api-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.873079 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c50de725-9c5d-4801-8163-c4382a024617-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c50de725-9c5d-4801-8163-c4382a024617\") " pod="openstack/nova-scheduler-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.873614 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/571ccc83-9293-4ac8-bc08-6b659925845e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"571ccc83-9293-4ac8-bc08-6b659925845e\") " pod="openstack/nova-api-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.873779 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c50de725-9c5d-4801-8163-c4382a024617-config-data\") pod \"nova-scheduler-0\" (UID: \"c50de725-9c5d-4801-8163-c4382a024617\") " pod="openstack/nova-scheduler-0" Feb 28 04:55:21 crc 
kubenswrapper[5014]: I0228 04:55:21.876314 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/571ccc83-9293-4ac8-bc08-6b659925845e-config-data\") pod \"nova-api-0\" (UID: \"571ccc83-9293-4ac8-bc08-6b659925845e\") " pod="openstack/nova-api-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.891947 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q54t2\" (UniqueName: \"kubernetes.io/projected/571ccc83-9293-4ac8-bc08-6b659925845e-kube-api-access-q54t2\") pod \"nova-api-0\" (UID: \"571ccc83-9293-4ac8-bc08-6b659925845e\") " pod="openstack/nova-api-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.894560 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-c82cz" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.905032 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhtxp\" (UniqueName: \"kubernetes.io/projected/c50de725-9c5d-4801-8163-c4382a024617-kube-api-access-bhtxp\") pod \"nova-scheduler-0\" (UID: \"c50de725-9c5d-4801-8163-c4382a024617\") " pod="openstack/nova-scheduler-0" Feb 28 04:55:21 crc kubenswrapper[5014]: I0228 04:55:21.986137 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 04:55:22 crc kubenswrapper[5014]: I0228 04:55:22.060668 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 28 04:55:22 crc kubenswrapper[5014]: I0228 04:55:22.069533 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 28 04:55:22 crc kubenswrapper[5014]: I0228 04:55:22.335959 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-b7gl9"] Feb 28 04:55:22 crc kubenswrapper[5014]: I0228 04:55:22.418317 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lp5x6"] Feb 28 04:55:22 crc kubenswrapper[5014]: I0228 04:55:22.419469 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lp5x6" Feb 28 04:55:22 crc kubenswrapper[5014]: I0228 04:55:22.422844 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 28 04:55:22 crc kubenswrapper[5014]: I0228 04:55:22.423376 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 28 04:55:22 crc kubenswrapper[5014]: I0228 04:55:22.427002 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lp5x6"] Feb 28 04:55:22 crc kubenswrapper[5014]: I0228 04:55:22.594184 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-c82cz"] Feb 28 04:55:22 crc kubenswrapper[5014]: I0228 04:55:22.595644 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81e1c9f9-6a89-4ff4-8075-7d737bd42ec5-config-data\") pod \"nova-cell1-conductor-db-sync-lp5x6\" (UID: \"81e1c9f9-6a89-4ff4-8075-7d737bd42ec5\") " pod="openstack/nova-cell1-conductor-db-sync-lp5x6" Feb 28 04:55:22 crc kubenswrapper[5014]: I0228 04:55:22.595982 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81e1c9f9-6a89-4ff4-8075-7d737bd42ec5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-lp5x6\" (UID: 
\"81e1c9f9-6a89-4ff4-8075-7d737bd42ec5\") " pod="openstack/nova-cell1-conductor-db-sync-lp5x6" Feb 28 04:55:22 crc kubenswrapper[5014]: I0228 04:55:22.596043 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkv57\" (UniqueName: \"kubernetes.io/projected/81e1c9f9-6a89-4ff4-8075-7d737bd42ec5-kube-api-access-qkv57\") pod \"nova-cell1-conductor-db-sync-lp5x6\" (UID: \"81e1c9f9-6a89-4ff4-8075-7d737bd42ec5\") " pod="openstack/nova-cell1-conductor-db-sync-lp5x6" Feb 28 04:55:22 crc kubenswrapper[5014]: I0228 04:55:22.596061 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81e1c9f9-6a89-4ff4-8075-7d737bd42ec5-scripts\") pod \"nova-cell1-conductor-db-sync-lp5x6\" (UID: \"81e1c9f9-6a89-4ff4-8075-7d737bd42ec5\") " pod="openstack/nova-cell1-conductor-db-sync-lp5x6" Feb 28 04:55:22 crc kubenswrapper[5014]: W0228 04:55:22.600231 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c2c7b5d_d778_4d96_a6fb_171203f594d8.slice/crio-46778856ad3102e25d0220e3fb635e03265f251baa08a1a6a6e77f4fc9c93374 WatchSource:0}: Error finding container 46778856ad3102e25d0220e3fb635e03265f251baa08a1a6a6e77f4fc9c93374: Status 404 returned error can't find the container with id 46778856ad3102e25d0220e3fb635e03265f251baa08a1a6a6e77f4fc9c93374 Feb 28 04:55:22 crc kubenswrapper[5014]: I0228 04:55:22.610825 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 28 04:55:22 crc kubenswrapper[5014]: I0228 04:55:22.698485 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81e1c9f9-6a89-4ff4-8075-7d737bd42ec5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-lp5x6\" (UID: \"81e1c9f9-6a89-4ff4-8075-7d737bd42ec5\") " 
pod="openstack/nova-cell1-conductor-db-sync-lp5x6" Feb 28 04:55:22 crc kubenswrapper[5014]: I0228 04:55:22.700272 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkv57\" (UniqueName: \"kubernetes.io/projected/81e1c9f9-6a89-4ff4-8075-7d737bd42ec5-kube-api-access-qkv57\") pod \"nova-cell1-conductor-db-sync-lp5x6\" (UID: \"81e1c9f9-6a89-4ff4-8075-7d737bd42ec5\") " pod="openstack/nova-cell1-conductor-db-sync-lp5x6" Feb 28 04:55:22 crc kubenswrapper[5014]: I0228 04:55:22.700323 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81e1c9f9-6a89-4ff4-8075-7d737bd42ec5-scripts\") pod \"nova-cell1-conductor-db-sync-lp5x6\" (UID: \"81e1c9f9-6a89-4ff4-8075-7d737bd42ec5\") " pod="openstack/nova-cell1-conductor-db-sync-lp5x6" Feb 28 04:55:22 crc kubenswrapper[5014]: I0228 04:55:22.700493 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81e1c9f9-6a89-4ff4-8075-7d737bd42ec5-config-data\") pod \"nova-cell1-conductor-db-sync-lp5x6\" (UID: \"81e1c9f9-6a89-4ff4-8075-7d737bd42ec5\") " pod="openstack/nova-cell1-conductor-db-sync-lp5x6" Feb 28 04:55:22 crc kubenswrapper[5014]: I0228 04:55:22.711705 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81e1c9f9-6a89-4ff4-8075-7d737bd42ec5-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-lp5x6\" (UID: \"81e1c9f9-6a89-4ff4-8075-7d737bd42ec5\") " pod="openstack/nova-cell1-conductor-db-sync-lp5x6" Feb 28 04:55:22 crc kubenswrapper[5014]: I0228 04:55:22.711718 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81e1c9f9-6a89-4ff4-8075-7d737bd42ec5-config-data\") pod \"nova-cell1-conductor-db-sync-lp5x6\" (UID: \"81e1c9f9-6a89-4ff4-8075-7d737bd42ec5\") " 
pod="openstack/nova-cell1-conductor-db-sync-lp5x6" Feb 28 04:55:22 crc kubenswrapper[5014]: I0228 04:55:22.712333 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81e1c9f9-6a89-4ff4-8075-7d737bd42ec5-scripts\") pod \"nova-cell1-conductor-db-sync-lp5x6\" (UID: \"81e1c9f9-6a89-4ff4-8075-7d737bd42ec5\") " pod="openstack/nova-cell1-conductor-db-sync-lp5x6" Feb 28 04:55:22 crc kubenswrapper[5014]: I0228 04:55:22.719447 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkv57\" (UniqueName: \"kubernetes.io/projected/81e1c9f9-6a89-4ff4-8075-7d737bd42ec5-kube-api-access-qkv57\") pod \"nova-cell1-conductor-db-sync-lp5x6\" (UID: \"81e1c9f9-6a89-4ff4-8075-7d737bd42ec5\") " pod="openstack/nova-cell1-conductor-db-sync-lp5x6" Feb 28 04:55:22 crc kubenswrapper[5014]: I0228 04:55:22.757247 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lp5x6" Feb 28 04:55:22 crc kubenswrapper[5014]: I0228 04:55:22.837716 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 04:55:22 crc kubenswrapper[5014]: I0228 04:55:22.934699 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 04:55:22 crc kubenswrapper[5014]: I0228 04:55:22.947182 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 28 04:55:22 crc kubenswrapper[5014]: W0228 04:55:22.952866 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod571ccc83_9293_4ac8_bc08_6b659925845e.slice/crio-5bb4931e488e89975aa902c3f6751df7ba7954c1a0b4839faa19174e1fb5cd06 WatchSource:0}: Error finding container 5bb4931e488e89975aa902c3f6751df7ba7954c1a0b4839faa19174e1fb5cd06: Status 404 returned error can't find the container with id 
5bb4931e488e89975aa902c3f6751df7ba7954c1a0b4839faa19174e1fb5cd06 Feb 28 04:55:23 crc kubenswrapper[5014]: I0228 04:55:23.258162 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lp5x6"] Feb 28 04:55:23 crc kubenswrapper[5014]: W0228 04:55:23.266477 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81e1c9f9_6a89_4ff4_8075_7d737bd42ec5.slice/crio-c8715020ce34b931784b2532e7debf2dbf675e9e8f1e875050ed3f2b0e98e7d6 WatchSource:0}: Error finding container c8715020ce34b931784b2532e7debf2dbf675e9e8f1e875050ed3f2b0e98e7d6: Status 404 returned error can't find the container with id c8715020ce34b931784b2532e7debf2dbf675e9e8f1e875050ed3f2b0e98e7d6 Feb 28 04:55:23 crc kubenswrapper[5014]: I0228 04:55:23.354403 5014 generic.go:334] "Generic (PLEG): container finished" podID="facf8396-8625-4f68-9167-be011dd01a6b" containerID="5ccb2505a1e0aed9b6ef3d2cac84886e163e740d64e7ed4e0c9b8efc2be11d2c" exitCode=0 Feb 28 04:55:23 crc kubenswrapper[5014]: I0228 04:55:23.354469 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-b7gl9" event={"ID":"facf8396-8625-4f68-9167-be011dd01a6b","Type":"ContainerDied","Data":"5ccb2505a1e0aed9b6ef3d2cac84886e163e740d64e7ed4e0c9b8efc2be11d2c"} Feb 28 04:55:23 crc kubenswrapper[5014]: I0228 04:55:23.355023 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-b7gl9" event={"ID":"facf8396-8625-4f68-9167-be011dd01a6b","Type":"ContainerStarted","Data":"9ea617ac903d25bf6ec72f60bc879bed2eba89c58ba6aaab45b57aeddd454bf1"} Feb 28 04:55:23 crc kubenswrapper[5014]: I0228 04:55:23.357466 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4c2600df-f028-4e93-82c5-c25cb1112ffb","Type":"ContainerStarted","Data":"1e1cdc95d010bbf49321e43e8a1a8c04d12443a04f6e25e51f5c33c54f332466"} Feb 28 04:55:23 crc 
kubenswrapper[5014]: I0228 04:55:23.358432 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"571ccc83-9293-4ac8-bc08-6b659925845e","Type":"ContainerStarted","Data":"5bb4931e488e89975aa902c3f6751df7ba7954c1a0b4839faa19174e1fb5cd06"} Feb 28 04:55:23 crc kubenswrapper[5014]: I0228 04:55:23.363416 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c50de725-9c5d-4801-8163-c4382a024617","Type":"ContainerStarted","Data":"048c82af309e1a5a10a4dd6f1ba2dcab6a8991b5cbf1c9c0cd488a2dc1bfb597"} Feb 28 04:55:23 crc kubenswrapper[5014]: I0228 04:55:23.365337 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-c82cz" event={"ID":"3c2c7b5d-d778-4d96-a6fb-171203f594d8","Type":"ContainerStarted","Data":"b1dd5b5ba21cf1804bc869620e9853290a2dddf95a8896d0ee3babae155e8083"} Feb 28 04:55:23 crc kubenswrapper[5014]: I0228 04:55:23.365382 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-c82cz" event={"ID":"3c2c7b5d-d778-4d96-a6fb-171203f594d8","Type":"ContainerStarted","Data":"46778856ad3102e25d0220e3fb635e03265f251baa08a1a6a6e77f4fc9c93374"} Feb 28 04:55:23 crc kubenswrapper[5014]: I0228 04:55:23.384574 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4164fc18-910b-4bef-9be7-a6d5f9d1004e","Type":"ContainerStarted","Data":"0dd309f135c020e740c2a31c06c589252734d14947073f3e77d9b9cf1f401fdf"} Feb 28 04:55:23 crc kubenswrapper[5014]: I0228 04:55:23.386007 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lp5x6" event={"ID":"81e1c9f9-6a89-4ff4-8075-7d737bd42ec5","Type":"ContainerStarted","Data":"c8715020ce34b931784b2532e7debf2dbf675e9e8f1e875050ed3f2b0e98e7d6"} Feb 28 04:55:23 crc kubenswrapper[5014]: I0228 04:55:23.401785 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell0-cell-mapping-c82cz" podStartSLOduration=2.401761778 podStartE2EDuration="2.401761778s" podCreationTimestamp="2026-02-28 04:55:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:55:23.38806086 +0000 UTC m=+1312.058186770" watchObservedRunningTime="2026-02-28 04:55:23.401761778 +0000 UTC m=+1312.071887688" Feb 28 04:55:24 crc kubenswrapper[5014]: I0228 04:55:24.399063 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lp5x6" event={"ID":"81e1c9f9-6a89-4ff4-8075-7d737bd42ec5","Type":"ContainerStarted","Data":"f309a6be4a34fc7643f1ea01c54cfa09bfd84d2ee5ea74c1f0a01c7e3de4583c"} Feb 28 04:55:24 crc kubenswrapper[5014]: I0228 04:55:24.405218 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-b7gl9" event={"ID":"facf8396-8625-4f68-9167-be011dd01a6b","Type":"ContainerStarted","Data":"1a95e2c1e3d8200dba02f5832879431e99ebff1b2dd2e907fb6f71067b755341"} Feb 28 04:55:24 crc kubenswrapper[5014]: I0228 04:55:24.405314 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-757b4f8459-b7gl9" Feb 28 04:55:24 crc kubenswrapper[5014]: I0228 04:55:24.426577 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-lp5x6" podStartSLOduration=2.426554209 podStartE2EDuration="2.426554209s" podCreationTimestamp="2026-02-28 04:55:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:55:24.416659923 +0000 UTC m=+1313.086785833" watchObservedRunningTime="2026-02-28 04:55:24.426554209 +0000 UTC m=+1313.096680119" Feb 28 04:55:24 crc kubenswrapper[5014]: I0228 04:55:24.436688 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-757b4f8459-b7gl9" podStartSLOduration=3.436668699 podStartE2EDuration="3.436668699s" podCreationTimestamp="2026-02-28 04:55:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:55:24.43593775 +0000 UTC m=+1313.106063680" watchObservedRunningTime="2026-02-28 04:55:24.436668699 +0000 UTC m=+1313.106794609" Feb 28 04:55:24 crc kubenswrapper[5014]: I0228 04:55:24.973452 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 28 04:55:24 crc kubenswrapper[5014]: I0228 04:55:24.993040 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 04:55:26 crc kubenswrapper[5014]: I0228 04:55:26.429106 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4164fc18-910b-4bef-9be7-a6d5f9d1004e","Type":"ContainerStarted","Data":"7220e268e38c8476cf6687e410bd1da561f7d5e7eb64667af3768d2e879678cf"} Feb 28 04:55:26 crc kubenswrapper[5014]: I0228 04:55:26.429621 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4164fc18-910b-4bef-9be7-a6d5f9d1004e","Type":"ContainerStarted","Data":"8c3ecb0cfaee94809753750f3589f3738b60ad980c5e6c6a5b4614fbab5ca089"} Feb 28 04:55:26 crc kubenswrapper[5014]: I0228 04:55:26.429239 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4164fc18-910b-4bef-9be7-a6d5f9d1004e" containerName="nova-metadata-metadata" containerID="cri-o://7220e268e38c8476cf6687e410bd1da561f7d5e7eb64667af3768d2e879678cf" gracePeriod=30 Feb 28 04:55:26 crc kubenswrapper[5014]: I0228 04:55:26.429166 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4164fc18-910b-4bef-9be7-a6d5f9d1004e" containerName="nova-metadata-log" 
containerID="cri-o://8c3ecb0cfaee94809753750f3589f3738b60ad980c5e6c6a5b4614fbab5ca089" gracePeriod=30 Feb 28 04:55:26 crc kubenswrapper[5014]: I0228 04:55:26.437446 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4c2600df-f028-4e93-82c5-c25cb1112ffb","Type":"ContainerStarted","Data":"be15dc1ba18bd66140d3775f378871bb8f6ffd240268605f8de10a63eedfc9d2"} Feb 28 04:55:26 crc kubenswrapper[5014]: I0228 04:55:26.437617 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="4c2600df-f028-4e93-82c5-c25cb1112ffb" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://be15dc1ba18bd66140d3775f378871bb8f6ffd240268605f8de10a63eedfc9d2" gracePeriod=30 Feb 28 04:55:26 crc kubenswrapper[5014]: I0228 04:55:26.449964 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.735655418 podStartE2EDuration="5.449924616s" podCreationTimestamp="2026-02-28 04:55:21 +0000 UTC" firstStartedPulling="2026-02-28 04:55:22.855139569 +0000 UTC m=+1311.525265479" lastFinishedPulling="2026-02-28 04:55:25.569408767 +0000 UTC m=+1314.239534677" observedRunningTime="2026-02-28 04:55:26.447182173 +0000 UTC m=+1315.117308083" watchObservedRunningTime="2026-02-28 04:55:26.449924616 +0000 UTC m=+1315.120050516" Feb 28 04:55:26 crc kubenswrapper[5014]: I0228 04:55:26.462552 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"571ccc83-9293-4ac8-bc08-6b659925845e","Type":"ContainerStarted","Data":"adae4e3669d5239495a5201e157cd64b3ec98d23e1e520ee7bee8a0c91fe1017"} Feb 28 04:55:26 crc kubenswrapper[5014]: I0228 04:55:26.462606 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"571ccc83-9293-4ac8-bc08-6b659925845e","Type":"ContainerStarted","Data":"a32e77190dcee6357cc24518590a684cc445a736e5d603448cf1d2e7f3ea4c94"} Feb 
28 04:55:26 crc kubenswrapper[5014]: I0228 04:55:26.468673 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c50de725-9c5d-4801-8163-c4382a024617","Type":"ContainerStarted","Data":"be4229f34fdc880a55ee26d5c07982d79b21da3608629f5934b146b0b47f97d5"} Feb 28 04:55:26 crc kubenswrapper[5014]: I0228 04:55:26.470869 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.529058423 podStartE2EDuration="5.470850188s" podCreationTimestamp="2026-02-28 04:55:21 +0000 UTC" firstStartedPulling="2026-02-28 04:55:22.627024327 +0000 UTC m=+1311.297150237" lastFinishedPulling="2026-02-28 04:55:25.568816092 +0000 UTC m=+1314.238942002" observedRunningTime="2026-02-28 04:55:26.464163828 +0000 UTC m=+1315.134289748" watchObservedRunningTime="2026-02-28 04:55:26.470850188 +0000 UTC m=+1315.140976098" Feb 28 04:55:26 crc kubenswrapper[5014]: I0228 04:55:26.500227 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.888726316 podStartE2EDuration="5.500202726s" podCreationTimestamp="2026-02-28 04:55:21 +0000 UTC" firstStartedPulling="2026-02-28 04:55:22.967598517 +0000 UTC m=+1311.637724427" lastFinishedPulling="2026-02-28 04:55:25.579074927 +0000 UTC m=+1314.249200837" observedRunningTime="2026-02-28 04:55:26.490338351 +0000 UTC m=+1315.160464261" watchObservedRunningTime="2026-02-28 04:55:26.500202726 +0000 UTC m=+1315.170328636" Feb 28 04:55:26 crc kubenswrapper[5014]: I0228 04:55:26.511552 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.819828186 podStartE2EDuration="5.511526249s" podCreationTimestamp="2026-02-28 04:55:21 +0000 UTC" firstStartedPulling="2026-02-28 04:55:22.95766409 +0000 UTC m=+1311.627790000" lastFinishedPulling="2026-02-28 04:55:25.649362153 +0000 UTC m=+1314.319488063" 
observedRunningTime="2026-02-28 04:55:26.506235187 +0000 UTC m=+1315.176361107" watchObservedRunningTime="2026-02-28 04:55:26.511526249 +0000 UTC m=+1315.181652159" Feb 28 04:55:26 crc kubenswrapper[5014]: I0228 04:55:26.854739 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 28 04:55:26 crc kubenswrapper[5014]: I0228 04:55:26.987288 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 28 04:55:26 crc kubenswrapper[5014]: I0228 04:55:26.987342 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.062311 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.329458 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.491342 5014 generic.go:334] "Generic (PLEG): container finished" podID="4164fc18-910b-4bef-9be7-a6d5f9d1004e" containerID="7220e268e38c8476cf6687e410bd1da561f7d5e7eb64667af3768d2e879678cf" exitCode=0 Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.491379 5014 generic.go:334] "Generic (PLEG): container finished" podID="4164fc18-910b-4bef-9be7-a6d5f9d1004e" containerID="8c3ecb0cfaee94809753750f3589f3738b60ad980c5e6c6a5b4614fbab5ca089" exitCode=143 Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.491436 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4164fc18-910b-4bef-9be7-a6d5f9d1004e","Type":"ContainerDied","Data":"7220e268e38c8476cf6687e410bd1da561f7d5e7eb64667af3768d2e879678cf"} Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.491470 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.491494 5014 scope.go:117] "RemoveContainer" containerID="7220e268e38c8476cf6687e410bd1da561f7d5e7eb64667af3768d2e879678cf" Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.491479 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4164fc18-910b-4bef-9be7-a6d5f9d1004e","Type":"ContainerDied","Data":"8c3ecb0cfaee94809753750f3589f3738b60ad980c5e6c6a5b4614fbab5ca089"} Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.491543 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4164fc18-910b-4bef-9be7-a6d5f9d1004e","Type":"ContainerDied","Data":"0dd309f135c020e740c2a31c06c589252734d14947073f3e77d9b9cf1f401fdf"} Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.512960 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2dvf\" (UniqueName: \"kubernetes.io/projected/4164fc18-910b-4bef-9be7-a6d5f9d1004e-kube-api-access-r2dvf\") pod \"4164fc18-910b-4bef-9be7-a6d5f9d1004e\" (UID: \"4164fc18-910b-4bef-9be7-a6d5f9d1004e\") " Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.513136 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4164fc18-910b-4bef-9be7-a6d5f9d1004e-combined-ca-bundle\") pod \"4164fc18-910b-4bef-9be7-a6d5f9d1004e\" (UID: \"4164fc18-910b-4bef-9be7-a6d5f9d1004e\") " Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.513232 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4164fc18-910b-4bef-9be7-a6d5f9d1004e-config-data\") pod \"4164fc18-910b-4bef-9be7-a6d5f9d1004e\" (UID: \"4164fc18-910b-4bef-9be7-a6d5f9d1004e\") " Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.513791 5014 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4164fc18-910b-4bef-9be7-a6d5f9d1004e-logs\") pod \"4164fc18-910b-4bef-9be7-a6d5f9d1004e\" (UID: \"4164fc18-910b-4bef-9be7-a6d5f9d1004e\") " Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.515054 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4164fc18-910b-4bef-9be7-a6d5f9d1004e-logs" (OuterVolumeSpecName: "logs") pod "4164fc18-910b-4bef-9be7-a6d5f9d1004e" (UID: "4164fc18-910b-4bef-9be7-a6d5f9d1004e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.519002 5014 scope.go:117] "RemoveContainer" containerID="8c3ecb0cfaee94809753750f3589f3738b60ad980c5e6c6a5b4614fbab5ca089" Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.519127 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4164fc18-910b-4bef-9be7-a6d5f9d1004e-kube-api-access-r2dvf" (OuterVolumeSpecName: "kube-api-access-r2dvf") pod "4164fc18-910b-4bef-9be7-a6d5f9d1004e" (UID: "4164fc18-910b-4bef-9be7-a6d5f9d1004e"). InnerVolumeSpecName "kube-api-access-r2dvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.547243 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4164fc18-910b-4bef-9be7-a6d5f9d1004e-config-data" (OuterVolumeSpecName: "config-data") pod "4164fc18-910b-4bef-9be7-a6d5f9d1004e" (UID: "4164fc18-910b-4bef-9be7-a6d5f9d1004e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.549765 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4164fc18-910b-4bef-9be7-a6d5f9d1004e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4164fc18-910b-4bef-9be7-a6d5f9d1004e" (UID: "4164fc18-910b-4bef-9be7-a6d5f9d1004e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.588684 5014 scope.go:117] "RemoveContainer" containerID="7220e268e38c8476cf6687e410bd1da561f7d5e7eb64667af3768d2e879678cf" Feb 28 04:55:27 crc kubenswrapper[5014]: E0228 04:55:27.590499 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7220e268e38c8476cf6687e410bd1da561f7d5e7eb64667af3768d2e879678cf\": container with ID starting with 7220e268e38c8476cf6687e410bd1da561f7d5e7eb64667af3768d2e879678cf not found: ID does not exist" containerID="7220e268e38c8476cf6687e410bd1da561f7d5e7eb64667af3768d2e879678cf" Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.590539 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7220e268e38c8476cf6687e410bd1da561f7d5e7eb64667af3768d2e879678cf"} err="failed to get container status \"7220e268e38c8476cf6687e410bd1da561f7d5e7eb64667af3768d2e879678cf\": rpc error: code = NotFound desc = could not find container \"7220e268e38c8476cf6687e410bd1da561f7d5e7eb64667af3768d2e879678cf\": container with ID starting with 7220e268e38c8476cf6687e410bd1da561f7d5e7eb64667af3768d2e879678cf not found: ID does not exist" Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.590563 5014 scope.go:117] "RemoveContainer" containerID="8c3ecb0cfaee94809753750f3589f3738b60ad980c5e6c6a5b4614fbab5ca089" Feb 28 04:55:27 crc kubenswrapper[5014]: E0228 04:55:27.590997 5014 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c3ecb0cfaee94809753750f3589f3738b60ad980c5e6c6a5b4614fbab5ca089\": container with ID starting with 8c3ecb0cfaee94809753750f3589f3738b60ad980c5e6c6a5b4614fbab5ca089 not found: ID does not exist" containerID="8c3ecb0cfaee94809753750f3589f3738b60ad980c5e6c6a5b4614fbab5ca089" Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.591038 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c3ecb0cfaee94809753750f3589f3738b60ad980c5e6c6a5b4614fbab5ca089"} err="failed to get container status \"8c3ecb0cfaee94809753750f3589f3738b60ad980c5e6c6a5b4614fbab5ca089\": rpc error: code = NotFound desc = could not find container \"8c3ecb0cfaee94809753750f3589f3738b60ad980c5e6c6a5b4614fbab5ca089\": container with ID starting with 8c3ecb0cfaee94809753750f3589f3738b60ad980c5e6c6a5b4614fbab5ca089 not found: ID does not exist" Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.591058 5014 scope.go:117] "RemoveContainer" containerID="7220e268e38c8476cf6687e410bd1da561f7d5e7eb64667af3768d2e879678cf" Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.591265 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7220e268e38c8476cf6687e410bd1da561f7d5e7eb64667af3768d2e879678cf"} err="failed to get container status \"7220e268e38c8476cf6687e410bd1da561f7d5e7eb64667af3768d2e879678cf\": rpc error: code = NotFound desc = could not find container \"7220e268e38c8476cf6687e410bd1da561f7d5e7eb64667af3768d2e879678cf\": container with ID starting with 7220e268e38c8476cf6687e410bd1da561f7d5e7eb64667af3768d2e879678cf not found: ID does not exist" Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.591292 5014 scope.go:117] "RemoveContainer" containerID="8c3ecb0cfaee94809753750f3589f3738b60ad980c5e6c6a5b4614fbab5ca089" Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.591644 5014 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c3ecb0cfaee94809753750f3589f3738b60ad980c5e6c6a5b4614fbab5ca089"} err="failed to get container status \"8c3ecb0cfaee94809753750f3589f3738b60ad980c5e6c6a5b4614fbab5ca089\": rpc error: code = NotFound desc = could not find container \"8c3ecb0cfaee94809753750f3589f3738b60ad980c5e6c6a5b4614fbab5ca089\": container with ID starting with 8c3ecb0cfaee94809753750f3589f3738b60ad980c5e6c6a5b4614fbab5ca089 not found: ID does not exist" Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.616380 5014 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4164fc18-910b-4bef-9be7-a6d5f9d1004e-logs\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.616767 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r2dvf\" (UniqueName: \"kubernetes.io/projected/4164fc18-910b-4bef-9be7-a6d5f9d1004e-kube-api-access-r2dvf\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.616782 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4164fc18-910b-4bef-9be7-a6d5f9d1004e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.616794 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4164fc18-910b-4bef-9be7-a6d5f9d1004e-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.840664 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.860853 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.873331 5014 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/nova-metadata-0"] Feb 28 04:55:27 crc kubenswrapper[5014]: E0228 04:55:27.873902 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4164fc18-910b-4bef-9be7-a6d5f9d1004e" containerName="nova-metadata-log" Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.873929 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="4164fc18-910b-4bef-9be7-a6d5f9d1004e" containerName="nova-metadata-log" Feb 28 04:55:27 crc kubenswrapper[5014]: E0228 04:55:27.873963 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4164fc18-910b-4bef-9be7-a6d5f9d1004e" containerName="nova-metadata-metadata" Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.873974 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="4164fc18-910b-4bef-9be7-a6d5f9d1004e" containerName="nova-metadata-metadata" Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.874270 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="4164fc18-910b-4bef-9be7-a6d5f9d1004e" containerName="nova-metadata-metadata" Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.874311 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="4164fc18-910b-4bef-9be7-a6d5f9d1004e" containerName="nova-metadata-log" Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.875919 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.878675 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.879075 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 28 04:55:27 crc kubenswrapper[5014]: I0228 04:55:27.883125 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 04:55:28 crc kubenswrapper[5014]: I0228 04:55:28.024027 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6cab9e9-ac57-4d68-9276-707427d9e517-logs\") pod \"nova-metadata-0\" (UID: \"d6cab9e9-ac57-4d68-9276-707427d9e517\") " pod="openstack/nova-metadata-0" Feb 28 04:55:28 crc kubenswrapper[5014]: I0228 04:55:28.024104 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c59s5\" (UniqueName: \"kubernetes.io/projected/d6cab9e9-ac57-4d68-9276-707427d9e517-kube-api-access-c59s5\") pod \"nova-metadata-0\" (UID: \"d6cab9e9-ac57-4d68-9276-707427d9e517\") " pod="openstack/nova-metadata-0" Feb 28 04:55:28 crc kubenswrapper[5014]: I0228 04:55:28.024144 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6cab9e9-ac57-4d68-9276-707427d9e517-config-data\") pod \"nova-metadata-0\" (UID: \"d6cab9e9-ac57-4d68-9276-707427d9e517\") " pod="openstack/nova-metadata-0" Feb 28 04:55:28 crc kubenswrapper[5014]: I0228 04:55:28.024264 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6cab9e9-ac57-4d68-9276-707427d9e517-nova-metadata-tls-certs\") pod \"nova-metadata-0\" 
(UID: \"d6cab9e9-ac57-4d68-9276-707427d9e517\") " pod="openstack/nova-metadata-0" Feb 28 04:55:28 crc kubenswrapper[5014]: I0228 04:55:28.024514 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6cab9e9-ac57-4d68-9276-707427d9e517-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d6cab9e9-ac57-4d68-9276-707427d9e517\") " pod="openstack/nova-metadata-0" Feb 28 04:55:28 crc kubenswrapper[5014]: I0228 04:55:28.126409 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6cab9e9-ac57-4d68-9276-707427d9e517-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d6cab9e9-ac57-4d68-9276-707427d9e517\") " pod="openstack/nova-metadata-0" Feb 28 04:55:28 crc kubenswrapper[5014]: I0228 04:55:28.126801 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6cab9e9-ac57-4d68-9276-707427d9e517-logs\") pod \"nova-metadata-0\" (UID: \"d6cab9e9-ac57-4d68-9276-707427d9e517\") " pod="openstack/nova-metadata-0" Feb 28 04:55:28 crc kubenswrapper[5014]: I0228 04:55:28.126955 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c59s5\" (UniqueName: \"kubernetes.io/projected/d6cab9e9-ac57-4d68-9276-707427d9e517-kube-api-access-c59s5\") pod \"nova-metadata-0\" (UID: \"d6cab9e9-ac57-4d68-9276-707427d9e517\") " pod="openstack/nova-metadata-0" Feb 28 04:55:28 crc kubenswrapper[5014]: I0228 04:55:28.127054 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6cab9e9-ac57-4d68-9276-707427d9e517-config-data\") pod \"nova-metadata-0\" (UID: \"d6cab9e9-ac57-4d68-9276-707427d9e517\") " pod="openstack/nova-metadata-0" Feb 28 04:55:28 crc kubenswrapper[5014]: I0228 04:55:28.127204 5014 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6cab9e9-ac57-4d68-9276-707427d9e517-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d6cab9e9-ac57-4d68-9276-707427d9e517\") " pod="openstack/nova-metadata-0" Feb 28 04:55:28 crc kubenswrapper[5014]: I0228 04:55:28.127221 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6cab9e9-ac57-4d68-9276-707427d9e517-logs\") pod \"nova-metadata-0\" (UID: \"d6cab9e9-ac57-4d68-9276-707427d9e517\") " pod="openstack/nova-metadata-0" Feb 28 04:55:28 crc kubenswrapper[5014]: I0228 04:55:28.132366 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6cab9e9-ac57-4d68-9276-707427d9e517-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d6cab9e9-ac57-4d68-9276-707427d9e517\") " pod="openstack/nova-metadata-0" Feb 28 04:55:28 crc kubenswrapper[5014]: I0228 04:55:28.135486 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6cab9e9-ac57-4d68-9276-707427d9e517-config-data\") pod \"nova-metadata-0\" (UID: \"d6cab9e9-ac57-4d68-9276-707427d9e517\") " pod="openstack/nova-metadata-0" Feb 28 04:55:28 crc kubenswrapper[5014]: I0228 04:55:28.143857 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6cab9e9-ac57-4d68-9276-707427d9e517-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d6cab9e9-ac57-4d68-9276-707427d9e517\") " pod="openstack/nova-metadata-0" Feb 28 04:55:28 crc kubenswrapper[5014]: I0228 04:55:28.154107 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c59s5\" (UniqueName: \"kubernetes.io/projected/d6cab9e9-ac57-4d68-9276-707427d9e517-kube-api-access-c59s5\") pod 
\"nova-metadata-0\" (UID: \"d6cab9e9-ac57-4d68-9276-707427d9e517\") " pod="openstack/nova-metadata-0" Feb 28 04:55:28 crc kubenswrapper[5014]: I0228 04:55:28.183632 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4164fc18-910b-4bef-9be7-a6d5f9d1004e" path="/var/lib/kubelet/pods/4164fc18-910b-4bef-9be7-a6d5f9d1004e/volumes" Feb 28 04:55:28 crc kubenswrapper[5014]: I0228 04:55:28.252690 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 04:55:28 crc kubenswrapper[5014]: I0228 04:55:28.734152 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 04:55:29 crc kubenswrapper[5014]: I0228 04:55:29.512188 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d6cab9e9-ac57-4d68-9276-707427d9e517","Type":"ContainerStarted","Data":"05c4bf6de818129372ce936e9eb8f6dfe6e9060819ccef6cc8081b761cb4b111"} Feb 28 04:55:29 crc kubenswrapper[5014]: I0228 04:55:29.512456 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d6cab9e9-ac57-4d68-9276-707427d9e517","Type":"ContainerStarted","Data":"c0ba6f265e02ad1f90e34890082c3ed34c3678362f26cb9251e8937d84ead157"} Feb 28 04:55:29 crc kubenswrapper[5014]: I0228 04:55:29.512471 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d6cab9e9-ac57-4d68-9276-707427d9e517","Type":"ContainerStarted","Data":"bc45c4757025e675a7eaacd44e9d1381b1ddc007472f307bfbded655a43e0e66"} Feb 28 04:55:30 crc kubenswrapper[5014]: I0228 04:55:30.524327 5014 generic.go:334] "Generic (PLEG): container finished" podID="81e1c9f9-6a89-4ff4-8075-7d737bd42ec5" containerID="f309a6be4a34fc7643f1ea01c54cfa09bfd84d2ee5ea74c1f0a01c7e3de4583c" exitCode=0 Feb 28 04:55:30 crc kubenswrapper[5014]: I0228 04:55:30.524482 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-conductor-db-sync-lp5x6" event={"ID":"81e1c9f9-6a89-4ff4-8075-7d737bd42ec5","Type":"ContainerDied","Data":"f309a6be4a34fc7643f1ea01c54cfa09bfd84d2ee5ea74c1f0a01c7e3de4583c"} Feb 28 04:55:30 crc kubenswrapper[5014]: I0228 04:55:30.528747 5014 generic.go:334] "Generic (PLEG): container finished" podID="3c2c7b5d-d778-4d96-a6fb-171203f594d8" containerID="b1dd5b5ba21cf1804bc869620e9853290a2dddf95a8896d0ee3babae155e8083" exitCode=0 Feb 28 04:55:30 crc kubenswrapper[5014]: I0228 04:55:30.528960 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-c82cz" event={"ID":"3c2c7b5d-d778-4d96-a6fb-171203f594d8","Type":"ContainerDied","Data":"b1dd5b5ba21cf1804bc869620e9853290a2dddf95a8896d0ee3babae155e8083"} Feb 28 04:55:30 crc kubenswrapper[5014]: I0228 04:55:30.550534 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.5505062069999997 podStartE2EDuration="3.550506207s" podCreationTimestamp="2026-02-28 04:55:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:55:29.535318614 +0000 UTC m=+1318.205444524" watchObservedRunningTime="2026-02-28 04:55:30.550506207 +0000 UTC m=+1319.220632137" Feb 28 04:55:31 crc kubenswrapper[5014]: I0228 04:55:31.809918 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-757b4f8459-b7gl9" Feb 28 04:55:31 crc kubenswrapper[5014]: I0228 04:55:31.879657 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-zpvkp"] Feb 28 04:55:31 crc kubenswrapper[5014]: I0228 04:55:31.879973 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-zpvkp" podUID="3240ff52-33fc-4027-a9ea-f3e17780b320" containerName="dnsmasq-dns" 
containerID="cri-o://129dec289e5b998d8987fabb65bda9b643da0fa333c464af7bd11e43f49b7fa3" gracePeriod=10 Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.062206 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.070435 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.070489 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.100150 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.184638 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lp5x6" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.186339 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-c82cz" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.317591 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c2c7b5d-d778-4d96-a6fb-171203f594d8-scripts\") pod \"3c2c7b5d-d778-4d96-a6fb-171203f594d8\" (UID: \"3c2c7b5d-d778-4d96-a6fb-171203f594d8\") " Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.317635 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77tm2\" (UniqueName: \"kubernetes.io/projected/3c2c7b5d-d778-4d96-a6fb-171203f594d8-kube-api-access-77tm2\") pod \"3c2c7b5d-d778-4d96-a6fb-171203f594d8\" (UID: \"3c2c7b5d-d778-4d96-a6fb-171203f594d8\") " Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.317662 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c2c7b5d-d778-4d96-a6fb-171203f594d8-combined-ca-bundle\") pod \"3c2c7b5d-d778-4d96-a6fb-171203f594d8\" (UID: \"3c2c7b5d-d778-4d96-a6fb-171203f594d8\") " Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.317683 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81e1c9f9-6a89-4ff4-8075-7d737bd42ec5-config-data\") pod \"81e1c9f9-6a89-4ff4-8075-7d737bd42ec5\" (UID: \"81e1c9f9-6a89-4ff4-8075-7d737bd42ec5\") " Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.317795 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81e1c9f9-6a89-4ff4-8075-7d737bd42ec5-scripts\") pod \"81e1c9f9-6a89-4ff4-8075-7d737bd42ec5\" (UID: \"81e1c9f9-6a89-4ff4-8075-7d737bd42ec5\") " Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.317884 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/81e1c9f9-6a89-4ff4-8075-7d737bd42ec5-combined-ca-bundle\") pod \"81e1c9f9-6a89-4ff4-8075-7d737bd42ec5\" (UID: \"81e1c9f9-6a89-4ff4-8075-7d737bd42ec5\") " Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.317911 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c2c7b5d-d778-4d96-a6fb-171203f594d8-config-data\") pod \"3c2c7b5d-d778-4d96-a6fb-171203f594d8\" (UID: \"3c2c7b5d-d778-4d96-a6fb-171203f594d8\") " Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.317964 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkv57\" (UniqueName: \"kubernetes.io/projected/81e1c9f9-6a89-4ff4-8075-7d737bd42ec5-kube-api-access-qkv57\") pod \"81e1c9f9-6a89-4ff4-8075-7d737bd42ec5\" (UID: \"81e1c9f9-6a89-4ff4-8075-7d737bd42ec5\") " Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.323420 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c2c7b5d-d778-4d96-a6fb-171203f594d8-kube-api-access-77tm2" (OuterVolumeSpecName: "kube-api-access-77tm2") pod "3c2c7b5d-d778-4d96-a6fb-171203f594d8" (UID: "3c2c7b5d-d778-4d96-a6fb-171203f594d8"). InnerVolumeSpecName "kube-api-access-77tm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.329616 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e1c9f9-6a89-4ff4-8075-7d737bd42ec5-kube-api-access-qkv57" (OuterVolumeSpecName: "kube-api-access-qkv57") pod "81e1c9f9-6a89-4ff4-8075-7d737bd42ec5" (UID: "81e1c9f9-6a89-4ff4-8075-7d737bd42ec5"). InnerVolumeSpecName "kube-api-access-qkv57". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.329708 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c2c7b5d-d778-4d96-a6fb-171203f594d8-scripts" (OuterVolumeSpecName: "scripts") pod "3c2c7b5d-d778-4d96-a6fb-171203f594d8" (UID: "3c2c7b5d-d778-4d96-a6fb-171203f594d8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.329640 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81e1c9f9-6a89-4ff4-8075-7d737bd42ec5-scripts" (OuterVolumeSpecName: "scripts") pod "81e1c9f9-6a89-4ff4-8075-7d737bd42ec5" (UID: "81e1c9f9-6a89-4ff4-8075-7d737bd42ec5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.394195 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81e1c9f9-6a89-4ff4-8075-7d737bd42ec5-config-data" (OuterVolumeSpecName: "config-data") pod "81e1c9f9-6a89-4ff4-8075-7d737bd42ec5" (UID: "81e1c9f9-6a89-4ff4-8075-7d737bd42ec5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.394888 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81e1c9f9-6a89-4ff4-8075-7d737bd42ec5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81e1c9f9-6a89-4ff4-8075-7d737bd42ec5" (UID: "81e1c9f9-6a89-4ff4-8075-7d737bd42ec5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.412827 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c2c7b5d-d778-4d96-a6fb-171203f594d8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c2c7b5d-d778-4d96-a6fb-171203f594d8" (UID: "3c2c7b5d-d778-4d96-a6fb-171203f594d8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.416143 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-zpvkp" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.419682 5014 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81e1c9f9-6a89-4ff4-8075-7d737bd42ec5-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.419703 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81e1c9f9-6a89-4ff4-8075-7d737bd42ec5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.419712 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkv57\" (UniqueName: \"kubernetes.io/projected/81e1c9f9-6a89-4ff4-8075-7d737bd42ec5-kube-api-access-qkv57\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.419721 5014 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3c2c7b5d-d778-4d96-a6fb-171203f594d8-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.419730 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77tm2\" (UniqueName: \"kubernetes.io/projected/3c2c7b5d-d778-4d96-a6fb-171203f594d8-kube-api-access-77tm2\") on 
node \"crc\" DevicePath \"\"" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.419738 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c2c7b5d-d778-4d96-a6fb-171203f594d8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.419746 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81e1c9f9-6a89-4ff4-8075-7d737bd42ec5-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.428896 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c2c7b5d-d778-4d96-a6fb-171203f594d8-config-data" (OuterVolumeSpecName: "config-data") pod "3c2c7b5d-d778-4d96-a6fb-171203f594d8" (UID: "3c2c7b5d-d778-4d96-a6fb-171203f594d8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.520495 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58chh\" (UniqueName: \"kubernetes.io/projected/3240ff52-33fc-4027-a9ea-f3e17780b320-kube-api-access-58chh\") pod \"3240ff52-33fc-4027-a9ea-f3e17780b320\" (UID: \"3240ff52-33fc-4027-a9ea-f3e17780b320\") " Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.520537 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3240ff52-33fc-4027-a9ea-f3e17780b320-ovsdbserver-sb\") pod \"3240ff52-33fc-4027-a9ea-f3e17780b320\" (UID: \"3240ff52-33fc-4027-a9ea-f3e17780b320\") " Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.520660 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3240ff52-33fc-4027-a9ea-f3e17780b320-ovsdbserver-nb\") pod 
\"3240ff52-33fc-4027-a9ea-f3e17780b320\" (UID: \"3240ff52-33fc-4027-a9ea-f3e17780b320\") " Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.520764 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3240ff52-33fc-4027-a9ea-f3e17780b320-dns-svc\") pod \"3240ff52-33fc-4027-a9ea-f3e17780b320\" (UID: \"3240ff52-33fc-4027-a9ea-f3e17780b320\") " Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.520827 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3240ff52-33fc-4027-a9ea-f3e17780b320-config\") pod \"3240ff52-33fc-4027-a9ea-f3e17780b320\" (UID: \"3240ff52-33fc-4027-a9ea-f3e17780b320\") " Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.520869 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3240ff52-33fc-4027-a9ea-f3e17780b320-dns-swift-storage-0\") pod \"3240ff52-33fc-4027-a9ea-f3e17780b320\" (UID: \"3240ff52-33fc-4027-a9ea-f3e17780b320\") " Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.521229 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c2c7b5d-d778-4d96-a6fb-171203f594d8-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.523781 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3240ff52-33fc-4027-a9ea-f3e17780b320-kube-api-access-58chh" (OuterVolumeSpecName: "kube-api-access-58chh") pod "3240ff52-33fc-4027-a9ea-f3e17780b320" (UID: "3240ff52-33fc-4027-a9ea-f3e17780b320"). InnerVolumeSpecName "kube-api-access-58chh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.548369 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lp5x6" event={"ID":"81e1c9f9-6a89-4ff4-8075-7d737bd42ec5","Type":"ContainerDied","Data":"c8715020ce34b931784b2532e7debf2dbf675e9e8f1e875050ed3f2b0e98e7d6"} Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.548410 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8715020ce34b931784b2532e7debf2dbf675e9e8f1e875050ed3f2b0e98e7d6" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.548463 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lp5x6" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.551309 5014 generic.go:334] "Generic (PLEG): container finished" podID="3240ff52-33fc-4027-a9ea-f3e17780b320" containerID="129dec289e5b998d8987fabb65bda9b643da0fa333c464af7bd11e43f49b7fa3" exitCode=0 Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.551573 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-zpvkp" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.551653 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-zpvkp" event={"ID":"3240ff52-33fc-4027-a9ea-f3e17780b320","Type":"ContainerDied","Data":"129dec289e5b998d8987fabb65bda9b643da0fa333c464af7bd11e43f49b7fa3"} Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.551705 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-zpvkp" event={"ID":"3240ff52-33fc-4027-a9ea-f3e17780b320","Type":"ContainerDied","Data":"3f89b8eb178965465502cc97c18d675da85aad21aab3a5856a9565eabc9162cc"} Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.551731 5014 scope.go:117] "RemoveContainer" containerID="129dec289e5b998d8987fabb65bda9b643da0fa333c464af7bd11e43f49b7fa3" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.557004 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-c82cz" event={"ID":"3c2c7b5d-d778-4d96-a6fb-171203f594d8","Type":"ContainerDied","Data":"46778856ad3102e25d0220e3fb635e03265f251baa08a1a6a6e77f4fc9c93374"} Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.557072 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46778856ad3102e25d0220e3fb635e03265f251baa08a1a6a6e77f4fc9c93374" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.557037 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-c82cz" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.576358 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3240ff52-33fc-4027-a9ea-f3e17780b320-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3240ff52-33fc-4027-a9ea-f3e17780b320" (UID: "3240ff52-33fc-4027-a9ea-f3e17780b320"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.577478 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3240ff52-33fc-4027-a9ea-f3e17780b320-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3240ff52-33fc-4027-a9ea-f3e17780b320" (UID: "3240ff52-33fc-4027-a9ea-f3e17780b320"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.590422 5014 scope.go:117] "RemoveContainer" containerID="25c781fbb0babf95b6fc112d6b43cc4a543b6dfd3421cdc41890b293fbd0e486" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.596927 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.609609 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3240ff52-33fc-4027-a9ea-f3e17780b320-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3240ff52-33fc-4027-a9ea-f3e17780b320" (UID: "3240ff52-33fc-4027-a9ea-f3e17780b320"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.628424 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3240ff52-33fc-4027-a9ea-f3e17780b320-config" (OuterVolumeSpecName: "config") pod "3240ff52-33fc-4027-a9ea-f3e17780b320" (UID: "3240ff52-33fc-4027-a9ea-f3e17780b320"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.653827 5014 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3240ff52-33fc-4027-a9ea-f3e17780b320-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.653867 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3240ff52-33fc-4027-a9ea-f3e17780b320-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.653888 5014 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3240ff52-33fc-4027-a9ea-f3e17780b320-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.653904 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58chh\" (UniqueName: \"kubernetes.io/projected/3240ff52-33fc-4027-a9ea-f3e17780b320-kube-api-access-58chh\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.653917 5014 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3240ff52-33fc-4027-a9ea-f3e17780b320-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.661801 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3240ff52-33fc-4027-a9ea-f3e17780b320-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3240ff52-33fc-4027-a9ea-f3e17780b320" (UID: "3240ff52-33fc-4027-a9ea-f3e17780b320"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.672110 5014 scope.go:117] "RemoveContainer" containerID="129dec289e5b998d8987fabb65bda9b643da0fa333c464af7bd11e43f49b7fa3" Feb 28 04:55:32 crc kubenswrapper[5014]: E0228 04:55:32.673299 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"129dec289e5b998d8987fabb65bda9b643da0fa333c464af7bd11e43f49b7fa3\": container with ID starting with 129dec289e5b998d8987fabb65bda9b643da0fa333c464af7bd11e43f49b7fa3 not found: ID does not exist" containerID="129dec289e5b998d8987fabb65bda9b643da0fa333c464af7bd11e43f49b7fa3" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.673330 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"129dec289e5b998d8987fabb65bda9b643da0fa333c464af7bd11e43f49b7fa3"} err="failed to get container status \"129dec289e5b998d8987fabb65bda9b643da0fa333c464af7bd11e43f49b7fa3\": rpc error: code = NotFound desc = could not find container \"129dec289e5b998d8987fabb65bda9b643da0fa333c464af7bd11e43f49b7fa3\": container with ID starting with 129dec289e5b998d8987fabb65bda9b643da0fa333c464af7bd11e43f49b7fa3 not found: ID does not exist" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.673351 5014 scope.go:117] "RemoveContainer" containerID="25c781fbb0babf95b6fc112d6b43cc4a543b6dfd3421cdc41890b293fbd0e486" Feb 28 04:55:32 crc kubenswrapper[5014]: E0228 04:55:32.673569 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25c781fbb0babf95b6fc112d6b43cc4a543b6dfd3421cdc41890b293fbd0e486\": container with ID starting with 25c781fbb0babf95b6fc112d6b43cc4a543b6dfd3421cdc41890b293fbd0e486 not found: ID does not exist" containerID="25c781fbb0babf95b6fc112d6b43cc4a543b6dfd3421cdc41890b293fbd0e486" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.673590 
5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25c781fbb0babf95b6fc112d6b43cc4a543b6dfd3421cdc41890b293fbd0e486"} err="failed to get container status \"25c781fbb0babf95b6fc112d6b43cc4a543b6dfd3421cdc41890b293fbd0e486\": rpc error: code = NotFound desc = could not find container \"25c781fbb0babf95b6fc112d6b43cc4a543b6dfd3421cdc41890b293fbd0e486\": container with ID starting with 25c781fbb0babf95b6fc112d6b43cc4a543b6dfd3421cdc41890b293fbd0e486 not found: ID does not exist" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.675461 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 28 04:55:32 crc kubenswrapper[5014]: E0228 04:55:32.675918 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81e1c9f9-6a89-4ff4-8075-7d737bd42ec5" containerName="nova-cell1-conductor-db-sync" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.675938 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="81e1c9f9-6a89-4ff4-8075-7d737bd42ec5" containerName="nova-cell1-conductor-db-sync" Feb 28 04:55:32 crc kubenswrapper[5014]: E0228 04:55:32.675958 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3240ff52-33fc-4027-a9ea-f3e17780b320" containerName="init" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.675966 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="3240ff52-33fc-4027-a9ea-f3e17780b320" containerName="init" Feb 28 04:55:32 crc kubenswrapper[5014]: E0228 04:55:32.675990 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c2c7b5d-d778-4d96-a6fb-171203f594d8" containerName="nova-manage" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.676003 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c2c7b5d-d778-4d96-a6fb-171203f594d8" containerName="nova-manage" Feb 28 04:55:32 crc kubenswrapper[5014]: E0228 04:55:32.676017 5014 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3240ff52-33fc-4027-a9ea-f3e17780b320" containerName="dnsmasq-dns" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.676023 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="3240ff52-33fc-4027-a9ea-f3e17780b320" containerName="dnsmasq-dns" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.676222 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c2c7b5d-d778-4d96-a6fb-171203f594d8" containerName="nova-manage" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.676242 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="3240ff52-33fc-4027-a9ea-f3e17780b320" containerName="dnsmasq-dns" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.676251 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="81e1c9f9-6a89-4ff4-8075-7d737bd42ec5" containerName="nova-cell1-conductor-db-sync" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.676957 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.679035 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.692586 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.755328 5014 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3240ff52-33fc-4027-a9ea-f3e17780b320-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.800651 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.800911 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" 
podUID="571ccc83-9293-4ac8-bc08-6b659925845e" containerName="nova-api-log" containerID="cri-o://a32e77190dcee6357cc24518590a684cc445a736e5d603448cf1d2e7f3ea4c94" gracePeriod=30 Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.802931 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="571ccc83-9293-4ac8-bc08-6b659925845e" containerName="nova-api-api" containerID="cri-o://adae4e3669d5239495a5201e157cd64b3ec98d23e1e520ee7bee8a0c91fe1017" gracePeriod=30 Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.817494 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="571ccc83-9293-4ac8-bc08-6b659925845e" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": EOF" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.817494 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="571ccc83-9293-4ac8-bc08-6b659925845e" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": EOF" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.847889 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.848311 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d6cab9e9-ac57-4d68-9276-707427d9e517" containerName="nova-metadata-log" containerID="cri-o://c0ba6f265e02ad1f90e34890082c3ed34c3678362f26cb9251e8937d84ead157" gracePeriod=30 Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.850308 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d6cab9e9-ac57-4d68-9276-707427d9e517" containerName="nova-metadata-metadata" containerID="cri-o://05c4bf6de818129372ce936e9eb8f6dfe6e9060819ccef6cc8081b761cb4b111" gracePeriod=30 Feb 28 04:55:32 crc kubenswrapper[5014]: 
I0228 04:55:32.856692 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01377f7d-9edd-424c-b22e-42fde4e51e95-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"01377f7d-9edd-424c-b22e-42fde4e51e95\") " pod="openstack/nova-cell1-conductor-0" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.856862 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01377f7d-9edd-424c-b22e-42fde4e51e95-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"01377f7d-9edd-424c-b22e-42fde4e51e95\") " pod="openstack/nova-cell1-conductor-0" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.856898 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4gwp\" (UniqueName: \"kubernetes.io/projected/01377f7d-9edd-424c-b22e-42fde4e51e95-kube-api-access-f4gwp\") pod \"nova-cell1-conductor-0\" (UID: \"01377f7d-9edd-424c-b22e-42fde4e51e95\") " pod="openstack/nova-cell1-conductor-0" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.926642 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-zpvkp"] Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.938343 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-zpvkp"] Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.958374 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01377f7d-9edd-424c-b22e-42fde4e51e95-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"01377f7d-9edd-424c-b22e-42fde4e51e95\") " pod="openstack/nova-cell1-conductor-0" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.958439 5014 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-f4gwp\" (UniqueName: \"kubernetes.io/projected/01377f7d-9edd-424c-b22e-42fde4e51e95-kube-api-access-f4gwp\") pod \"nova-cell1-conductor-0\" (UID: \"01377f7d-9edd-424c-b22e-42fde4e51e95\") " pod="openstack/nova-cell1-conductor-0" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.958508 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01377f7d-9edd-424c-b22e-42fde4e51e95-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"01377f7d-9edd-424c-b22e-42fde4e51e95\") " pod="openstack/nova-cell1-conductor-0" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.963597 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01377f7d-9edd-424c-b22e-42fde4e51e95-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"01377f7d-9edd-424c-b22e-42fde4e51e95\") " pod="openstack/nova-cell1-conductor-0" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.966579 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/01377f7d-9edd-424c-b22e-42fde4e51e95-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"01377f7d-9edd-424c-b22e-42fde4e51e95\") " pod="openstack/nova-cell1-conductor-0" Feb 28 04:55:32 crc kubenswrapper[5014]: I0228 04:55:32.976502 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4gwp\" (UniqueName: \"kubernetes.io/projected/01377f7d-9edd-424c-b22e-42fde4e51e95-kube-api-access-f4gwp\") pod \"nova-cell1-conductor-0\" (UID: \"01377f7d-9edd-424c-b22e-42fde4e51e95\") " pod="openstack/nova-cell1-conductor-0" Feb 28 04:55:33 crc kubenswrapper[5014]: I0228 04:55:33.000789 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 28 04:55:33 crc kubenswrapper[5014]: I0228 04:55:33.199609 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 04:55:33 crc kubenswrapper[5014]: I0228 04:55:33.254012 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 28 04:55:33 crc kubenswrapper[5014]: I0228 04:55:33.254070 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 28 04:55:33 crc kubenswrapper[5014]: I0228 04:55:33.475487 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 28 04:55:33 crc kubenswrapper[5014]: I0228 04:55:33.569897 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"01377f7d-9edd-424c-b22e-42fde4e51e95","Type":"ContainerStarted","Data":"da08c40a021c96dcf532b48def344e1a3c56be2731463e2f899488b14a858ccb"} Feb 28 04:55:33 crc kubenswrapper[5014]: I0228 04:55:33.575060 5014 generic.go:334] "Generic (PLEG): container finished" podID="d6cab9e9-ac57-4d68-9276-707427d9e517" containerID="05c4bf6de818129372ce936e9eb8f6dfe6e9060819ccef6cc8081b761cb4b111" exitCode=0 Feb 28 04:55:33 crc kubenswrapper[5014]: I0228 04:55:33.575100 5014 generic.go:334] "Generic (PLEG): container finished" podID="d6cab9e9-ac57-4d68-9276-707427d9e517" containerID="c0ba6f265e02ad1f90e34890082c3ed34c3678362f26cb9251e8937d84ead157" exitCode=143 Feb 28 04:55:33 crc kubenswrapper[5014]: I0228 04:55:33.575133 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d6cab9e9-ac57-4d68-9276-707427d9e517","Type":"ContainerDied","Data":"05c4bf6de818129372ce936e9eb8f6dfe6e9060819ccef6cc8081b761cb4b111"} Feb 28 04:55:33 crc kubenswrapper[5014]: I0228 04:55:33.575213 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"d6cab9e9-ac57-4d68-9276-707427d9e517","Type":"ContainerDied","Data":"c0ba6f265e02ad1f90e34890082c3ed34c3678362f26cb9251e8937d84ead157"} Feb 28 04:55:33 crc kubenswrapper[5014]: I0228 04:55:33.581993 5014 generic.go:334] "Generic (PLEG): container finished" podID="571ccc83-9293-4ac8-bc08-6b659925845e" containerID="a32e77190dcee6357cc24518590a684cc445a736e5d603448cf1d2e7f3ea4c94" exitCode=143 Feb 28 04:55:33 crc kubenswrapper[5014]: I0228 04:55:33.582943 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"571ccc83-9293-4ac8-bc08-6b659925845e","Type":"ContainerDied","Data":"a32e77190dcee6357cc24518590a684cc445a736e5d603448cf1d2e7f3ea4c94"} Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.009229 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.186449 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6cab9e9-ac57-4d68-9276-707427d9e517-config-data\") pod \"d6cab9e9-ac57-4d68-9276-707427d9e517\" (UID: \"d6cab9e9-ac57-4d68-9276-707427d9e517\") " Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.186529 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6cab9e9-ac57-4d68-9276-707427d9e517-combined-ca-bundle\") pod \"d6cab9e9-ac57-4d68-9276-707427d9e517\" (UID: \"d6cab9e9-ac57-4d68-9276-707427d9e517\") " Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.186588 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6cab9e9-ac57-4d68-9276-707427d9e517-logs\") pod \"d6cab9e9-ac57-4d68-9276-707427d9e517\" (UID: \"d6cab9e9-ac57-4d68-9276-707427d9e517\") " Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.186622 5014 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c59s5\" (UniqueName: \"kubernetes.io/projected/d6cab9e9-ac57-4d68-9276-707427d9e517-kube-api-access-c59s5\") pod \"d6cab9e9-ac57-4d68-9276-707427d9e517\" (UID: \"d6cab9e9-ac57-4d68-9276-707427d9e517\") " Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.186697 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6cab9e9-ac57-4d68-9276-707427d9e517-nova-metadata-tls-certs\") pod \"d6cab9e9-ac57-4d68-9276-707427d9e517\" (UID: \"d6cab9e9-ac57-4d68-9276-707427d9e517\") " Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.194178 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3240ff52-33fc-4027-a9ea-f3e17780b320" path="/var/lib/kubelet/pods/3240ff52-33fc-4027-a9ea-f3e17780b320/volumes" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.199119 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6cab9e9-ac57-4d68-9276-707427d9e517-logs" (OuterVolumeSpecName: "logs") pod "d6cab9e9-ac57-4d68-9276-707427d9e517" (UID: "d6cab9e9-ac57-4d68-9276-707427d9e517"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.214989 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6cab9e9-ac57-4d68-9276-707427d9e517-kube-api-access-c59s5" (OuterVolumeSpecName: "kube-api-access-c59s5") pod "d6cab9e9-ac57-4d68-9276-707427d9e517" (UID: "d6cab9e9-ac57-4d68-9276-707427d9e517"). InnerVolumeSpecName "kube-api-access-c59s5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.230936 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6cab9e9-ac57-4d68-9276-707427d9e517-config-data" (OuterVolumeSpecName: "config-data") pod "d6cab9e9-ac57-4d68-9276-707427d9e517" (UID: "d6cab9e9-ac57-4d68-9276-707427d9e517"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.250960 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6cab9e9-ac57-4d68-9276-707427d9e517-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "d6cab9e9-ac57-4d68-9276-707427d9e517" (UID: "d6cab9e9-ac57-4d68-9276-707427d9e517"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.269082 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6cab9e9-ac57-4d68-9276-707427d9e517-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d6cab9e9-ac57-4d68-9276-707427d9e517" (UID: "d6cab9e9-ac57-4d68-9276-707427d9e517"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.291926 5014 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6cab9e9-ac57-4d68-9276-707427d9e517-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.291974 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6cab9e9-ac57-4d68-9276-707427d9e517-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.291985 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6cab9e9-ac57-4d68-9276-707427d9e517-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.292002 5014 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6cab9e9-ac57-4d68-9276-707427d9e517-logs\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.292018 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c59s5\" (UniqueName: \"kubernetes.io/projected/d6cab9e9-ac57-4d68-9276-707427d9e517-kube-api-access-c59s5\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.591381 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"01377f7d-9edd-424c-b22e-42fde4e51e95","Type":"ContainerStarted","Data":"0044db6eada4412da16671e155ef3d1d4a86934350523475d9d45fc27f21bb33"} Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.592143 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.594153 5014 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openstack/nova-scheduler-0" podUID="c50de725-9c5d-4801-8163-c4382a024617" containerName="nova-scheduler-scheduler" containerID="cri-o://be4229f34fdc880a55ee26d5c07982d79b21da3608629f5934b146b0b47f97d5" gracePeriod=30 Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.594453 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.606217 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d6cab9e9-ac57-4d68-9276-707427d9e517","Type":"ContainerDied","Data":"bc45c4757025e675a7eaacd44e9d1381b1ddc007472f307bfbded655a43e0e66"} Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.606302 5014 scope.go:117] "RemoveContainer" containerID="05c4bf6de818129372ce936e9eb8f6dfe6e9060819ccef6cc8081b761cb4b111" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.634475 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.6344539019999997 podStartE2EDuration="2.634453902s" podCreationTimestamp="2026-02-28 04:55:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:55:34.625289695 +0000 UTC m=+1323.295415595" watchObservedRunningTime="2026-02-28 04:55:34.634453902 +0000 UTC m=+1323.304579822" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.671497 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.686780 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.687001 5014 scope.go:117] "RemoveContainer" containerID="c0ba6f265e02ad1f90e34890082c3ed34c3678362f26cb9251e8937d84ead157" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 
04:55:34.704584 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 28 04:55:34 crc kubenswrapper[5014]: E0228 04:55:34.704996 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6cab9e9-ac57-4d68-9276-707427d9e517" containerName="nova-metadata-metadata" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.705012 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6cab9e9-ac57-4d68-9276-707427d9e517" containerName="nova-metadata-metadata" Feb 28 04:55:34 crc kubenswrapper[5014]: E0228 04:55:34.705029 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6cab9e9-ac57-4d68-9276-707427d9e517" containerName="nova-metadata-log" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.705037 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6cab9e9-ac57-4d68-9276-707427d9e517" containerName="nova-metadata-log" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.705200 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6cab9e9-ac57-4d68-9276-707427d9e517" containerName="nova-metadata-metadata" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.705211 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6cab9e9-ac57-4d68-9276-707427d9e517" containerName="nova-metadata-log" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.706151 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.709647 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.709894 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.716934 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.812716 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4l58l\" (UniqueName: \"kubernetes.io/projected/119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6-kube-api-access-4l58l\") pod \"nova-metadata-0\" (UID: \"119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6\") " pod="openstack/nova-metadata-0" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.812767 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6\") " pod="openstack/nova-metadata-0" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.812821 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6-config-data\") pod \"nova-metadata-0\" (UID: \"119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6\") " pod="openstack/nova-metadata-0" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.812901 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6-logs\") pod \"nova-metadata-0\" (UID: 
\"119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6\") " pod="openstack/nova-metadata-0" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.812963 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6\") " pod="openstack/nova-metadata-0" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.914610 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6\") " pod="openstack/nova-metadata-0" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.914675 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4l58l\" (UniqueName: \"kubernetes.io/projected/119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6-kube-api-access-4l58l\") pod \"nova-metadata-0\" (UID: \"119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6\") " pod="openstack/nova-metadata-0" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.914704 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6\") " pod="openstack/nova-metadata-0" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.914734 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6-config-data\") pod \"nova-metadata-0\" (UID: \"119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6\") " pod="openstack/nova-metadata-0" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 
04:55:34.914822 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6-logs\") pod \"nova-metadata-0\" (UID: \"119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6\") " pod="openstack/nova-metadata-0" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.915221 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6-logs\") pod \"nova-metadata-0\" (UID: \"119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6\") " pod="openstack/nova-metadata-0" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.921330 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6\") " pod="openstack/nova-metadata-0" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.921605 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6-config-data\") pod \"nova-metadata-0\" (UID: \"119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6\") " pod="openstack/nova-metadata-0" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.926343 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6\") " pod="openstack/nova-metadata-0" Feb 28 04:55:34 crc kubenswrapper[5014]: I0228 04:55:34.943957 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4l58l\" (UniqueName: \"kubernetes.io/projected/119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6-kube-api-access-4l58l\") pod \"nova-metadata-0\" (UID: 
\"119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6\") " pod="openstack/nova-metadata-0" Feb 28 04:55:35 crc kubenswrapper[5014]: I0228 04:55:35.029177 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 04:55:35 crc kubenswrapper[5014]: I0228 04:55:35.552423 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 04:55:35 crc kubenswrapper[5014]: I0228 04:55:35.603621 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6","Type":"ContainerStarted","Data":"fe0081b3872eee7c96ff9fc7e87a7764e71d89f691869e2286eae4a280eba09a"} Feb 28 04:55:36 crc kubenswrapper[5014]: I0228 04:55:36.180745 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6cab9e9-ac57-4d68-9276-707427d9e517" path="/var/lib/kubelet/pods/d6cab9e9-ac57-4d68-9276-707427d9e517/volumes" Feb 28 04:55:36 crc kubenswrapper[5014]: I0228 04:55:36.615870 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6","Type":"ContainerStarted","Data":"5c8ee2339b57f90aaa1826f2d6799cb71d83820646554e51685ea9c951f7bddb"} Feb 28 04:55:36 crc kubenswrapper[5014]: I0228 04:55:36.615915 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6","Type":"ContainerStarted","Data":"98fe048c4c0aba06e2bb403dcf4a5832780969c4662afdedee99c2393b86aa6a"} Feb 28 04:55:37 crc kubenswrapper[5014]: E0228 04:55:37.064954 5014 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="be4229f34fdc880a55ee26d5c07982d79b21da3608629f5934b146b0b47f97d5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 28 04:55:37 crc kubenswrapper[5014]: E0228 
04:55:37.073105 5014 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="be4229f34fdc880a55ee26d5c07982d79b21da3608629f5934b146b0b47f97d5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 28 04:55:37 crc kubenswrapper[5014]: E0228 04:55:37.076026 5014 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="be4229f34fdc880a55ee26d5c07982d79b21da3608629f5934b146b0b47f97d5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 28 04:55:37 crc kubenswrapper[5014]: E0228 04:55:37.076095 5014 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="c50de725-9c5d-4801-8163-c4382a024617" containerName="nova-scheduler-scheduler" Feb 28 04:55:37 crc kubenswrapper[5014]: I0228 04:55:37.634198 5014 generic.go:334] "Generic (PLEG): container finished" podID="c50de725-9c5d-4801-8163-c4382a024617" containerID="be4229f34fdc880a55ee26d5c07982d79b21da3608629f5934b146b0b47f97d5" exitCode=0 Feb 28 04:55:37 crc kubenswrapper[5014]: I0228 04:55:37.634289 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c50de725-9c5d-4801-8163-c4382a024617","Type":"ContainerDied","Data":"be4229f34fdc880a55ee26d5c07982d79b21da3608629f5934b146b0b47f97d5"} Feb 28 04:55:37 crc kubenswrapper[5014]: I0228 04:55:37.725066 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 28 04:55:37 crc kubenswrapper[5014]: I0228 04:55:37.740082 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.740061011 podStartE2EDuration="3.740061011s" podCreationTimestamp="2026-02-28 04:55:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:55:36.655396524 +0000 UTC m=+1325.325522434" watchObservedRunningTime="2026-02-28 04:55:37.740061011 +0000 UTC m=+1326.410186921" Feb 28 04:55:37 crc kubenswrapper[5014]: I0228 04:55:37.871074 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhtxp\" (UniqueName: \"kubernetes.io/projected/c50de725-9c5d-4801-8163-c4382a024617-kube-api-access-bhtxp\") pod \"c50de725-9c5d-4801-8163-c4382a024617\" (UID: \"c50de725-9c5d-4801-8163-c4382a024617\") " Feb 28 04:55:37 crc kubenswrapper[5014]: I0228 04:55:37.871420 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c50de725-9c5d-4801-8163-c4382a024617-combined-ca-bundle\") pod \"c50de725-9c5d-4801-8163-c4382a024617\" (UID: \"c50de725-9c5d-4801-8163-c4382a024617\") " Feb 28 04:55:37 crc kubenswrapper[5014]: I0228 04:55:37.871603 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c50de725-9c5d-4801-8163-c4382a024617-config-data\") pod \"c50de725-9c5d-4801-8163-c4382a024617\" (UID: \"c50de725-9c5d-4801-8163-c4382a024617\") " Feb 28 04:55:37 crc kubenswrapper[5014]: I0228 04:55:37.877008 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c50de725-9c5d-4801-8163-c4382a024617-kube-api-access-bhtxp" (OuterVolumeSpecName: "kube-api-access-bhtxp") pod 
"c50de725-9c5d-4801-8163-c4382a024617" (UID: "c50de725-9c5d-4801-8163-c4382a024617"). InnerVolumeSpecName "kube-api-access-bhtxp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:55:37 crc kubenswrapper[5014]: I0228 04:55:37.909352 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c50de725-9c5d-4801-8163-c4382a024617-config-data" (OuterVolumeSpecName: "config-data") pod "c50de725-9c5d-4801-8163-c4382a024617" (UID: "c50de725-9c5d-4801-8163-c4382a024617"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:55:37 crc kubenswrapper[5014]: I0228 04:55:37.929291 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c50de725-9c5d-4801-8163-c4382a024617-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c50de725-9c5d-4801-8163-c4382a024617" (UID: "c50de725-9c5d-4801-8163-c4382a024617"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:55:37 crc kubenswrapper[5014]: I0228 04:55:37.973764 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c50de725-9c5d-4801-8163-c4382a024617-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:37 crc kubenswrapper[5014]: I0228 04:55:37.973794 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhtxp\" (UniqueName: \"kubernetes.io/projected/c50de725-9c5d-4801-8163-c4382a024617-kube-api-access-bhtxp\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:37 crc kubenswrapper[5014]: I0228 04:55:37.973803 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c50de725-9c5d-4801-8163-c4382a024617-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.043307 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.654053 5014 generic.go:334] "Generic (PLEG): container finished" podID="571ccc83-9293-4ac8-bc08-6b659925845e" containerID="adae4e3669d5239495a5201e157cd64b3ec98d23e1e520ee7bee8a0c91fe1017" exitCode=0 Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.654161 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"571ccc83-9293-4ac8-bc08-6b659925845e","Type":"ContainerDied","Data":"adae4e3669d5239495a5201e157cd64b3ec98d23e1e520ee7bee8a0c91fe1017"} Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.654242 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"571ccc83-9293-4ac8-bc08-6b659925845e","Type":"ContainerDied","Data":"5bb4931e488e89975aa902c3f6751df7ba7954c1a0b4839faa19174e1fb5cd06"} Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.654263 5014 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="5bb4931e488e89975aa902c3f6751df7ba7954c1a0b4839faa19174e1fb5cd06" Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.655618 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c50de725-9c5d-4801-8163-c4382a024617","Type":"ContainerDied","Data":"048c82af309e1a5a10a4dd6f1ba2dcab6a8991b5cbf1c9c0cd488a2dc1bfb597"} Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.655664 5014 scope.go:117] "RemoveContainer" containerID="be4229f34fdc880a55ee26d5c07982d79b21da3608629f5934b146b0b47f97d5" Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.655792 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.680658 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.696384 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.716918 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.744389 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 04:55:38 crc kubenswrapper[5014]: E0228 04:55:38.757234 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="571ccc83-9293-4ac8-bc08-6b659925845e" containerName="nova-api-api" Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.757289 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="571ccc83-9293-4ac8-bc08-6b659925845e" containerName="nova-api-api" Feb 28 04:55:38 crc kubenswrapper[5014]: E0228 04:55:38.757323 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c50de725-9c5d-4801-8163-c4382a024617" containerName="nova-scheduler-scheduler" Feb 28 
04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.757337 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="c50de725-9c5d-4801-8163-c4382a024617" containerName="nova-scheduler-scheduler" Feb 28 04:55:38 crc kubenswrapper[5014]: E0228 04:55:38.757368 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="571ccc83-9293-4ac8-bc08-6b659925845e" containerName="nova-api-log" Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.757378 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="571ccc83-9293-4ac8-bc08-6b659925845e" containerName="nova-api-log" Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.757704 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="571ccc83-9293-4ac8-bc08-6b659925845e" containerName="nova-api-log" Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.757739 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="c50de725-9c5d-4801-8163-c4382a024617" containerName="nova-scheduler-scheduler" Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.757760 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="571ccc83-9293-4ac8-bc08-6b659925845e" containerName="nova-api-api" Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.758646 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.762115 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.776335 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.786163 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/571ccc83-9293-4ac8-bc08-6b659925845e-logs\") pod \"571ccc83-9293-4ac8-bc08-6b659925845e\" (UID: \"571ccc83-9293-4ac8-bc08-6b659925845e\") " Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.786346 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q54t2\" (UniqueName: \"kubernetes.io/projected/571ccc83-9293-4ac8-bc08-6b659925845e-kube-api-access-q54t2\") pod \"571ccc83-9293-4ac8-bc08-6b659925845e\" (UID: \"571ccc83-9293-4ac8-bc08-6b659925845e\") " Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.786633 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/571ccc83-9293-4ac8-bc08-6b659925845e-combined-ca-bundle\") pod \"571ccc83-9293-4ac8-bc08-6b659925845e\" (UID: \"571ccc83-9293-4ac8-bc08-6b659925845e\") " Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.786773 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/571ccc83-9293-4ac8-bc08-6b659925845e-config-data\") pod \"571ccc83-9293-4ac8-bc08-6b659925845e\" (UID: \"571ccc83-9293-4ac8-bc08-6b659925845e\") " Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.790618 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/571ccc83-9293-4ac8-bc08-6b659925845e-logs" 
(OuterVolumeSpecName: "logs") pod "571ccc83-9293-4ac8-bc08-6b659925845e" (UID: "571ccc83-9293-4ac8-bc08-6b659925845e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.793582 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/571ccc83-9293-4ac8-bc08-6b659925845e-kube-api-access-q54t2" (OuterVolumeSpecName: "kube-api-access-q54t2") pod "571ccc83-9293-4ac8-bc08-6b659925845e" (UID: "571ccc83-9293-4ac8-bc08-6b659925845e"). InnerVolumeSpecName "kube-api-access-q54t2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.816560 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/571ccc83-9293-4ac8-bc08-6b659925845e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "571ccc83-9293-4ac8-bc08-6b659925845e" (UID: "571ccc83-9293-4ac8-bc08-6b659925845e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.821687 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/571ccc83-9293-4ac8-bc08-6b659925845e-config-data" (OuterVolumeSpecName: "config-data") pod "571ccc83-9293-4ac8-bc08-6b659925845e" (UID: "571ccc83-9293-4ac8-bc08-6b659925845e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.890601 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b76306f-13bd-4df4-a8a2-d3a1eede7020-config-data\") pod \"nova-scheduler-0\" (UID: \"2b76306f-13bd-4df4-a8a2-d3a1eede7020\") " pod="openstack/nova-scheduler-0" Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.890839 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-274cn\" (UniqueName: \"kubernetes.io/projected/2b76306f-13bd-4df4-a8a2-d3a1eede7020-kube-api-access-274cn\") pod \"nova-scheduler-0\" (UID: \"2b76306f-13bd-4df4-a8a2-d3a1eede7020\") " pod="openstack/nova-scheduler-0" Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.891395 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b76306f-13bd-4df4-a8a2-d3a1eede7020-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2b76306f-13bd-4df4-a8a2-d3a1eede7020\") " pod="openstack/nova-scheduler-0" Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.891546 5014 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/571ccc83-9293-4ac8-bc08-6b659925845e-logs\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.891579 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q54t2\" (UniqueName: \"kubernetes.io/projected/571ccc83-9293-4ac8-bc08-6b659925845e-kube-api-access-q54t2\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.891676 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/571ccc83-9293-4ac8-bc08-6b659925845e-combined-ca-bundle\") on node 
\"crc\" DevicePath \"\"" Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.891699 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/571ccc83-9293-4ac8-bc08-6b659925845e-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.993292 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b76306f-13bd-4df4-a8a2-d3a1eede7020-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"2b76306f-13bd-4df4-a8a2-d3a1eede7020\") " pod="openstack/nova-scheduler-0" Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.993572 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b76306f-13bd-4df4-a8a2-d3a1eede7020-config-data\") pod \"nova-scheduler-0\" (UID: \"2b76306f-13bd-4df4-a8a2-d3a1eede7020\") " pod="openstack/nova-scheduler-0" Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.993717 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-274cn\" (UniqueName: \"kubernetes.io/projected/2b76306f-13bd-4df4-a8a2-d3a1eede7020-kube-api-access-274cn\") pod \"nova-scheduler-0\" (UID: \"2b76306f-13bd-4df4-a8a2-d3a1eede7020\") " pod="openstack/nova-scheduler-0" Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.996994 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b76306f-13bd-4df4-a8a2-d3a1eede7020-config-data\") pod \"nova-scheduler-0\" (UID: \"2b76306f-13bd-4df4-a8a2-d3a1eede7020\") " pod="openstack/nova-scheduler-0" Feb 28 04:55:38 crc kubenswrapper[5014]: I0228 04:55:38.997558 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b76306f-13bd-4df4-a8a2-d3a1eede7020-combined-ca-bundle\") pod 
\"nova-scheduler-0\" (UID: \"2b76306f-13bd-4df4-a8a2-d3a1eede7020\") " pod="openstack/nova-scheduler-0" Feb 28 04:55:39 crc kubenswrapper[5014]: I0228 04:55:39.011634 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-274cn\" (UniqueName: \"kubernetes.io/projected/2b76306f-13bd-4df4-a8a2-d3a1eede7020-kube-api-access-274cn\") pod \"nova-scheduler-0\" (UID: \"2b76306f-13bd-4df4-a8a2-d3a1eede7020\") " pod="openstack/nova-scheduler-0" Feb 28 04:55:39 crc kubenswrapper[5014]: I0228 04:55:39.081262 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 28 04:55:39 crc kubenswrapper[5014]: I0228 04:55:39.569206 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 04:55:39 crc kubenswrapper[5014]: I0228 04:55:39.682390 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2b76306f-13bd-4df4-a8a2-d3a1eede7020","Type":"ContainerStarted","Data":"928becd321eb577895bfc83ffd3decab53001d6b9cb8e5ae4fcadf6d9d690300"} Feb 28 04:55:39 crc kubenswrapper[5014]: I0228 04:55:39.684991 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 28 04:55:39 crc kubenswrapper[5014]: I0228 04:55:39.734756 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 28 04:55:39 crc kubenswrapper[5014]: I0228 04:55:39.745846 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 28 04:55:39 crc kubenswrapper[5014]: I0228 04:55:39.759922 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 28 04:55:39 crc kubenswrapper[5014]: I0228 04:55:39.761606 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 28 04:55:39 crc kubenswrapper[5014]: I0228 04:55:39.764692 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 28 04:55:39 crc kubenswrapper[5014]: I0228 04:55:39.778211 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 28 04:55:39 crc kubenswrapper[5014]: I0228 04:55:39.933999 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb100331-cd16-4875-8529-b7e34aaa385e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eb100331-cd16-4875-8529-b7e34aaa385e\") " pod="openstack/nova-api-0" Feb 28 04:55:39 crc kubenswrapper[5014]: I0228 04:55:39.934315 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56sfx\" (UniqueName: \"kubernetes.io/projected/eb100331-cd16-4875-8529-b7e34aaa385e-kube-api-access-56sfx\") pod \"nova-api-0\" (UID: \"eb100331-cd16-4875-8529-b7e34aaa385e\") " pod="openstack/nova-api-0" Feb 28 04:55:39 crc kubenswrapper[5014]: I0228 04:55:39.934390 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb100331-cd16-4875-8529-b7e34aaa385e-config-data\") pod \"nova-api-0\" (UID: \"eb100331-cd16-4875-8529-b7e34aaa385e\") " pod="openstack/nova-api-0" Feb 28 04:55:39 crc kubenswrapper[5014]: I0228 04:55:39.934442 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb100331-cd16-4875-8529-b7e34aaa385e-logs\") pod \"nova-api-0\" (UID: \"eb100331-cd16-4875-8529-b7e34aaa385e\") " pod="openstack/nova-api-0" Feb 28 04:55:40 crc kubenswrapper[5014]: I0228 04:55:40.030265 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-metadata-0" Feb 28 04:55:40 crc kubenswrapper[5014]: I0228 04:55:40.030383 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 28 04:55:40 crc kubenswrapper[5014]: I0228 04:55:40.036117 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56sfx\" (UniqueName: \"kubernetes.io/projected/eb100331-cd16-4875-8529-b7e34aaa385e-kube-api-access-56sfx\") pod \"nova-api-0\" (UID: \"eb100331-cd16-4875-8529-b7e34aaa385e\") " pod="openstack/nova-api-0" Feb 28 04:55:40 crc kubenswrapper[5014]: I0228 04:55:40.036207 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb100331-cd16-4875-8529-b7e34aaa385e-config-data\") pod \"nova-api-0\" (UID: \"eb100331-cd16-4875-8529-b7e34aaa385e\") " pod="openstack/nova-api-0" Feb 28 04:55:40 crc kubenswrapper[5014]: I0228 04:55:40.036266 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb100331-cd16-4875-8529-b7e34aaa385e-logs\") pod \"nova-api-0\" (UID: \"eb100331-cd16-4875-8529-b7e34aaa385e\") " pod="openstack/nova-api-0" Feb 28 04:55:40 crc kubenswrapper[5014]: I0228 04:55:40.036373 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb100331-cd16-4875-8529-b7e34aaa385e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eb100331-cd16-4875-8529-b7e34aaa385e\") " pod="openstack/nova-api-0" Feb 28 04:55:40 crc kubenswrapper[5014]: I0228 04:55:40.036711 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb100331-cd16-4875-8529-b7e34aaa385e-logs\") pod \"nova-api-0\" (UID: \"eb100331-cd16-4875-8529-b7e34aaa385e\") " pod="openstack/nova-api-0" Feb 28 04:55:40 crc kubenswrapper[5014]: I0228 04:55:40.041607 5014 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb100331-cd16-4875-8529-b7e34aaa385e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"eb100331-cd16-4875-8529-b7e34aaa385e\") " pod="openstack/nova-api-0" Feb 28 04:55:40 crc kubenswrapper[5014]: I0228 04:55:40.041971 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb100331-cd16-4875-8529-b7e34aaa385e-config-data\") pod \"nova-api-0\" (UID: \"eb100331-cd16-4875-8529-b7e34aaa385e\") " pod="openstack/nova-api-0" Feb 28 04:55:40 crc kubenswrapper[5014]: I0228 04:55:40.054837 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56sfx\" (UniqueName: \"kubernetes.io/projected/eb100331-cd16-4875-8529-b7e34aaa385e-kube-api-access-56sfx\") pod \"nova-api-0\" (UID: \"eb100331-cd16-4875-8529-b7e34aaa385e\") " pod="openstack/nova-api-0" Feb 28 04:55:40 crc kubenswrapper[5014]: I0228 04:55:40.083878 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 28 04:55:40 crc kubenswrapper[5014]: I0228 04:55:40.198761 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="571ccc83-9293-4ac8-bc08-6b659925845e" path="/var/lib/kubelet/pods/571ccc83-9293-4ac8-bc08-6b659925845e/volumes" Feb 28 04:55:40 crc kubenswrapper[5014]: I0228 04:55:40.199751 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c50de725-9c5d-4801-8163-c4382a024617" path="/var/lib/kubelet/pods/c50de725-9c5d-4801-8163-c4382a024617/volumes" Feb 28 04:55:40 crc kubenswrapper[5014]: I0228 04:55:40.558897 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 28 04:55:40 crc kubenswrapper[5014]: I0228 04:55:40.695110 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2b76306f-13bd-4df4-a8a2-d3a1eede7020","Type":"ContainerStarted","Data":"3336e496711e2ad799de320a9cb7fcdfc3046fdddf17c6e42eb0becbe91e7955"} Feb 28 04:55:40 crc kubenswrapper[5014]: I0228 04:55:40.696321 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb100331-cd16-4875-8529-b7e34aaa385e","Type":"ContainerStarted","Data":"299f46e0e88bdbd9c441b934951e44a3dbb86b922b399fb7bd8e2ce60debcbbe"} Feb 28 04:55:40 crc kubenswrapper[5014]: I0228 04:55:40.717663 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.717637356 podStartE2EDuration="2.717637356s" podCreationTimestamp="2026-02-28 04:55:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:55:40.71257942 +0000 UTC m=+1329.382705330" watchObservedRunningTime="2026-02-28 04:55:40.717637356 +0000 UTC m=+1329.387763276" Feb 28 04:55:41 crc kubenswrapper[5014]: I0228 04:55:41.706910 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"eb100331-cd16-4875-8529-b7e34aaa385e","Type":"ContainerStarted","Data":"303fad5cd0690422998c89528c459e9eda4272bb4391c49db99a8934dd4c22dc"} Feb 28 04:55:41 crc kubenswrapper[5014]: I0228 04:55:41.707244 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb100331-cd16-4875-8529-b7e34aaa385e","Type":"ContainerStarted","Data":"93a97422b33866e5002f8f0552056eaed6681f073abab421f58b2c3e47585e74"} Feb 28 04:55:41 crc kubenswrapper[5014]: I0228 04:55:41.737848 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.737800712 podStartE2EDuration="2.737800712s" podCreationTimestamp="2026-02-28 04:55:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:55:41.734150404 +0000 UTC m=+1330.404276314" watchObservedRunningTime="2026-02-28 04:55:41.737800712 +0000 UTC m=+1330.407926642" Feb 28 04:55:43 crc kubenswrapper[5014]: I0228 04:55:43.607686 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 28 04:55:44 crc kubenswrapper[5014]: I0228 04:55:44.081933 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 28 04:55:45 crc kubenswrapper[5014]: I0228 04:55:45.030057 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 28 04:55:45 crc kubenswrapper[5014]: I0228 04:55:45.030770 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 28 04:55:46 crc kubenswrapper[5014]: I0228 04:55:46.047017 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": net/http: request canceled 
(Client.Timeout exceeded while awaiting headers)" Feb 28 04:55:46 crc kubenswrapper[5014]: I0228 04:55:46.047098 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 28 04:55:47 crc kubenswrapper[5014]: I0228 04:55:47.325098 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 28 04:55:47 crc kubenswrapper[5014]: I0228 04:55:47.326232 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="35f1a99d-7cdf-41d2-8106-e18f5660eb1b" containerName="kube-state-metrics" containerID="cri-o://5bdf8ea7a06cf7abbed98ff2393b40a3dfc8611d3494f2dd07d07b7560fb5a46" gracePeriod=30 Feb 28 04:55:47 crc kubenswrapper[5014]: I0228 04:55:47.804471 5014 generic.go:334] "Generic (PLEG): container finished" podID="35f1a99d-7cdf-41d2-8106-e18f5660eb1b" containerID="5bdf8ea7a06cf7abbed98ff2393b40a3dfc8611d3494f2dd07d07b7560fb5a46" exitCode=2 Feb 28 04:55:47 crc kubenswrapper[5014]: I0228 04:55:47.804576 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"35f1a99d-7cdf-41d2-8106-e18f5660eb1b","Type":"ContainerDied","Data":"5bdf8ea7a06cf7abbed98ff2393b40a3dfc8611d3494f2dd07d07b7560fb5a46"} Feb 28 04:55:47 crc kubenswrapper[5014]: I0228 04:55:47.804767 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"35f1a99d-7cdf-41d2-8106-e18f5660eb1b","Type":"ContainerDied","Data":"ca0801814ddccc3eab733703a5a47e6c988760014e8a9df2c3621948ce0a44cf"} Feb 28 04:55:47 crc kubenswrapper[5014]: I0228 04:55:47.804783 5014 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="ca0801814ddccc3eab733703a5a47e6c988760014e8a9df2c3621948ce0a44cf" Feb 28 04:55:47 crc kubenswrapper[5014]: I0228 04:55:47.855146 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 28 04:55:47 crc kubenswrapper[5014]: I0228 04:55:47.987370 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97n9h\" (UniqueName: \"kubernetes.io/projected/35f1a99d-7cdf-41d2-8106-e18f5660eb1b-kube-api-access-97n9h\") pod \"35f1a99d-7cdf-41d2-8106-e18f5660eb1b\" (UID: \"35f1a99d-7cdf-41d2-8106-e18f5660eb1b\") " Feb 28 04:55:47 crc kubenswrapper[5014]: I0228 04:55:47.994183 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35f1a99d-7cdf-41d2-8106-e18f5660eb1b-kube-api-access-97n9h" (OuterVolumeSpecName: "kube-api-access-97n9h") pod "35f1a99d-7cdf-41d2-8106-e18f5660eb1b" (UID: "35f1a99d-7cdf-41d2-8106-e18f5660eb1b"). InnerVolumeSpecName "kube-api-access-97n9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:55:48 crc kubenswrapper[5014]: I0228 04:55:48.089410 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97n9h\" (UniqueName: \"kubernetes.io/projected/35f1a99d-7cdf-41d2-8106-e18f5660eb1b-kube-api-access-97n9h\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:48 crc kubenswrapper[5014]: I0228 04:55:48.811698 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 28 04:55:48 crc kubenswrapper[5014]: I0228 04:55:48.832049 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 28 04:55:48 crc kubenswrapper[5014]: I0228 04:55:48.838611 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 28 04:55:48 crc kubenswrapper[5014]: I0228 04:55:48.854931 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 28 04:55:48 crc kubenswrapper[5014]: E0228 04:55:48.855455 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35f1a99d-7cdf-41d2-8106-e18f5660eb1b" containerName="kube-state-metrics" Feb 28 04:55:48 crc kubenswrapper[5014]: I0228 04:55:48.855520 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="35f1a99d-7cdf-41d2-8106-e18f5660eb1b" containerName="kube-state-metrics" Feb 28 04:55:48 crc kubenswrapper[5014]: I0228 04:55:48.855762 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="35f1a99d-7cdf-41d2-8106-e18f5660eb1b" containerName="kube-state-metrics" Feb 28 04:55:48 crc kubenswrapper[5014]: I0228 04:55:48.856423 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 28 04:55:48 crc kubenswrapper[5014]: I0228 04:55:48.862111 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 28 04:55:48 crc kubenswrapper[5014]: I0228 04:55:48.862149 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 28 04:55:48 crc kubenswrapper[5014]: I0228 04:55:48.866489 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 28 04:55:49 crc kubenswrapper[5014]: I0228 04:55:49.004434 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/020d4ca7-8d28-4954-a4a0-c031eb935a21-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"020d4ca7-8d28-4954-a4a0-c031eb935a21\") " pod="openstack/kube-state-metrics-0" Feb 28 04:55:49 crc kubenswrapper[5014]: I0228 04:55:49.004899 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/020d4ca7-8d28-4954-a4a0-c031eb935a21-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"020d4ca7-8d28-4954-a4a0-c031eb935a21\") " pod="openstack/kube-state-metrics-0" Feb 28 04:55:49 crc kubenswrapper[5014]: I0228 04:55:49.004956 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/020d4ca7-8d28-4954-a4a0-c031eb935a21-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"020d4ca7-8d28-4954-a4a0-c031eb935a21\") " pod="openstack/kube-state-metrics-0" Feb 28 04:55:49 crc kubenswrapper[5014]: I0228 04:55:49.004985 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqpmh\" (UniqueName: 
\"kubernetes.io/projected/020d4ca7-8d28-4954-a4a0-c031eb935a21-kube-api-access-fqpmh\") pod \"kube-state-metrics-0\" (UID: \"020d4ca7-8d28-4954-a4a0-c031eb935a21\") " pod="openstack/kube-state-metrics-0" Feb 28 04:55:49 crc kubenswrapper[5014]: I0228 04:55:49.082304 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 28 04:55:49 crc kubenswrapper[5014]: I0228 04:55:49.107236 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/020d4ca7-8d28-4954-a4a0-c031eb935a21-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"020d4ca7-8d28-4954-a4a0-c031eb935a21\") " pod="openstack/kube-state-metrics-0" Feb 28 04:55:49 crc kubenswrapper[5014]: I0228 04:55:49.107497 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/020d4ca7-8d28-4954-a4a0-c031eb935a21-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"020d4ca7-8d28-4954-a4a0-c031eb935a21\") " pod="openstack/kube-state-metrics-0" Feb 28 04:55:49 crc kubenswrapper[5014]: I0228 04:55:49.107540 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/020d4ca7-8d28-4954-a4a0-c031eb935a21-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"020d4ca7-8d28-4954-a4a0-c031eb935a21\") " pod="openstack/kube-state-metrics-0" Feb 28 04:55:49 crc kubenswrapper[5014]: I0228 04:55:49.107633 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqpmh\" (UniqueName: \"kubernetes.io/projected/020d4ca7-8d28-4954-a4a0-c031eb935a21-kube-api-access-fqpmh\") pod \"kube-state-metrics-0\" (UID: \"020d4ca7-8d28-4954-a4a0-c031eb935a21\") " pod="openstack/kube-state-metrics-0" Feb 28 04:55:49 crc kubenswrapper[5014]: I0228 
04:55:49.114229 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/020d4ca7-8d28-4954-a4a0-c031eb935a21-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"020d4ca7-8d28-4954-a4a0-c031eb935a21\") " pod="openstack/kube-state-metrics-0" Feb 28 04:55:49 crc kubenswrapper[5014]: I0228 04:55:49.115333 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/020d4ca7-8d28-4954-a4a0-c031eb935a21-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"020d4ca7-8d28-4954-a4a0-c031eb935a21\") " pod="openstack/kube-state-metrics-0" Feb 28 04:55:49 crc kubenswrapper[5014]: I0228 04:55:49.115700 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/020d4ca7-8d28-4954-a4a0-c031eb935a21-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"020d4ca7-8d28-4954-a4a0-c031eb935a21\") " pod="openstack/kube-state-metrics-0" Feb 28 04:55:49 crc kubenswrapper[5014]: I0228 04:55:49.118952 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 28 04:55:49 crc kubenswrapper[5014]: I0228 04:55:49.128293 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqpmh\" (UniqueName: \"kubernetes.io/projected/020d4ca7-8d28-4954-a4a0-c031eb935a21-kube-api-access-fqpmh\") pod \"kube-state-metrics-0\" (UID: \"020d4ca7-8d28-4954-a4a0-c031eb935a21\") " pod="openstack/kube-state-metrics-0" Feb 28 04:55:49 crc kubenswrapper[5014]: I0228 04:55:49.193288 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 28 04:55:49 crc kubenswrapper[5014]: I0228 04:55:49.240028 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:55:49 crc kubenswrapper[5014]: I0228 04:55:49.240290 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66f74e9e-e211-467c-a1a4-93a01ff93dd1" containerName="ceilometer-central-agent" containerID="cri-o://df2b1fed0b546adf18bd5346ef003f7add4379fdd3ce9c4a4e1102d6504e8cbb" gracePeriod=30 Feb 28 04:55:49 crc kubenswrapper[5014]: I0228 04:55:49.240404 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66f74e9e-e211-467c-a1a4-93a01ff93dd1" containerName="ceilometer-notification-agent" containerID="cri-o://b064ffa2970fac4c6c85bac6219a8a5822bfbb6c85df40e07e8d32d5afe5244a" gracePeriod=30 Feb 28 04:55:49 crc kubenswrapper[5014]: I0228 04:55:49.240412 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66f74e9e-e211-467c-a1a4-93a01ff93dd1" containerName="sg-core" containerID="cri-o://d5aababe7bdfb415d477854d0ce21bbc6bc6951eef00c94f73b554db95872510" gracePeriod=30 Feb 28 04:55:49 crc kubenswrapper[5014]: I0228 04:55:49.240452 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="66f74e9e-e211-467c-a1a4-93a01ff93dd1" containerName="proxy-httpd" containerID="cri-o://838d192b24556a3c9deb83806ca8561b630f516a6b4db2006248bd85156badaa" gracePeriod=30 Feb 28 04:55:49 crc kubenswrapper[5014]: W0228 04:55:49.707709 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod020d4ca7_8d28_4954_a4a0_c031eb935a21.slice/crio-ce7cebcb706c2fab0cae95efc53e71443e6b835979f8e82dec97c5905933d2f3 WatchSource:0}: Error finding container 
ce7cebcb706c2fab0cae95efc53e71443e6b835979f8e82dec97c5905933d2f3: Status 404 returned error can't find the container with id ce7cebcb706c2fab0cae95efc53e71443e6b835979f8e82dec97c5905933d2f3 Feb 28 04:55:49 crc kubenswrapper[5014]: I0228 04:55:49.710293 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 28 04:55:49 crc kubenswrapper[5014]: I0228 04:55:49.822501 5014 generic.go:334] "Generic (PLEG): container finished" podID="66f74e9e-e211-467c-a1a4-93a01ff93dd1" containerID="838d192b24556a3c9deb83806ca8561b630f516a6b4db2006248bd85156badaa" exitCode=0 Feb 28 04:55:49 crc kubenswrapper[5014]: I0228 04:55:49.822562 5014 generic.go:334] "Generic (PLEG): container finished" podID="66f74e9e-e211-467c-a1a4-93a01ff93dd1" containerID="d5aababe7bdfb415d477854d0ce21bbc6bc6951eef00c94f73b554db95872510" exitCode=2 Feb 28 04:55:49 crc kubenswrapper[5014]: I0228 04:55:49.822571 5014 generic.go:334] "Generic (PLEG): container finished" podID="66f74e9e-e211-467c-a1a4-93a01ff93dd1" containerID="df2b1fed0b546adf18bd5346ef003f7add4379fdd3ce9c4a4e1102d6504e8cbb" exitCode=0 Feb 28 04:55:49 crc kubenswrapper[5014]: I0228 04:55:49.822573 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66f74e9e-e211-467c-a1a4-93a01ff93dd1","Type":"ContainerDied","Data":"838d192b24556a3c9deb83806ca8561b630f516a6b4db2006248bd85156badaa"} Feb 28 04:55:49 crc kubenswrapper[5014]: I0228 04:55:49.822627 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66f74e9e-e211-467c-a1a4-93a01ff93dd1","Type":"ContainerDied","Data":"d5aababe7bdfb415d477854d0ce21bbc6bc6951eef00c94f73b554db95872510"} Feb 28 04:55:49 crc kubenswrapper[5014]: I0228 04:55:49.822640 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"66f74e9e-e211-467c-a1a4-93a01ff93dd1","Type":"ContainerDied","Data":"df2b1fed0b546adf18bd5346ef003f7add4379fdd3ce9c4a4e1102d6504e8cbb"} Feb 28 04:55:49 crc kubenswrapper[5014]: I0228 04:55:49.823785 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"020d4ca7-8d28-4954-a4a0-c031eb935a21","Type":"ContainerStarted","Data":"ce7cebcb706c2fab0cae95efc53e71443e6b835979f8e82dec97c5905933d2f3"} Feb 28 04:55:49 crc kubenswrapper[5014]: I0228 04:55:49.853061 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 28 04:55:50 crc kubenswrapper[5014]: I0228 04:55:50.084615 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 28 04:55:50 crc kubenswrapper[5014]: I0228 04:55:50.084675 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 28 04:55:50 crc kubenswrapper[5014]: I0228 04:55:50.181795 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35f1a99d-7cdf-41d2-8106-e18f5660eb1b" path="/var/lib/kubelet/pods/35f1a99d-7cdf-41d2-8106-e18f5660eb1b/volumes" Feb 28 04:55:50 crc kubenswrapper[5014]: I0228 04:55:50.837846 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"020d4ca7-8d28-4954-a4a0-c031eb935a21","Type":"ContainerStarted","Data":"c1f423fee120994ccaa5fd6d39fa65234a50f1a496a87ef23d0675de6a34aa3f"} Feb 28 04:55:50 crc kubenswrapper[5014]: I0228 04:55:50.838146 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 28 04:55:50 crc kubenswrapper[5014]: I0228 04:55:50.858181 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.309516738 podStartE2EDuration="2.858158272s" podCreationTimestamp="2026-02-28 04:55:48 +0000 UTC" 
firstStartedPulling="2026-02-28 04:55:49.712528908 +0000 UTC m=+1338.382654818" lastFinishedPulling="2026-02-28 04:55:50.261170442 +0000 UTC m=+1338.931296352" observedRunningTime="2026-02-28 04:55:50.856780175 +0000 UTC m=+1339.526906085" watchObservedRunningTime="2026-02-28 04:55:50.858158272 +0000 UTC m=+1339.528284182" Feb 28 04:55:51 crc kubenswrapper[5014]: I0228 04:55:51.167048 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="eb100331-cd16-4875-8529-b7e34aaa385e" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 28 04:55:51 crc kubenswrapper[5014]: I0228 04:55:51.167103 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="eb100331-cd16-4875-8529-b7e34aaa385e" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 28 04:55:51 crc kubenswrapper[5014]: I0228 04:55:51.851081 5014 generic.go:334] "Generic (PLEG): container finished" podID="66f74e9e-e211-467c-a1a4-93a01ff93dd1" containerID="b064ffa2970fac4c6c85bac6219a8a5822bfbb6c85df40e07e8d32d5afe5244a" exitCode=0 Feb 28 04:55:51 crc kubenswrapper[5014]: I0228 04:55:51.851179 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66f74e9e-e211-467c-a1a4-93a01ff93dd1","Type":"ContainerDied","Data":"b064ffa2970fac4c6c85bac6219a8a5822bfbb6c85df40e07e8d32d5afe5244a"} Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.251011 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.417079 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66f74e9e-e211-467c-a1a4-93a01ff93dd1-config-data\") pod \"66f74e9e-e211-467c-a1a4-93a01ff93dd1\" (UID: \"66f74e9e-e211-467c-a1a4-93a01ff93dd1\") " Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.417265 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66f74e9e-e211-467c-a1a4-93a01ff93dd1-log-httpd\") pod \"66f74e9e-e211-467c-a1a4-93a01ff93dd1\" (UID: \"66f74e9e-e211-467c-a1a4-93a01ff93dd1\") " Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.417328 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66f74e9e-e211-467c-a1a4-93a01ff93dd1-scripts\") pod \"66f74e9e-e211-467c-a1a4-93a01ff93dd1\" (UID: \"66f74e9e-e211-467c-a1a4-93a01ff93dd1\") " Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.417404 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66f74e9e-e211-467c-a1a4-93a01ff93dd1-run-httpd\") pod \"66f74e9e-e211-467c-a1a4-93a01ff93dd1\" (UID: \"66f74e9e-e211-467c-a1a4-93a01ff93dd1\") " Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.417466 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66f74e9e-e211-467c-a1a4-93a01ff93dd1-sg-core-conf-yaml\") pod \"66f74e9e-e211-467c-a1a4-93a01ff93dd1\" (UID: \"66f74e9e-e211-467c-a1a4-93a01ff93dd1\") " Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.417510 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/66f74e9e-e211-467c-a1a4-93a01ff93dd1-combined-ca-bundle\") pod \"66f74e9e-e211-467c-a1a4-93a01ff93dd1\" (UID: \"66f74e9e-e211-467c-a1a4-93a01ff93dd1\") " Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.417593 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d45n9\" (UniqueName: \"kubernetes.io/projected/66f74e9e-e211-467c-a1a4-93a01ff93dd1-kube-api-access-d45n9\") pod \"66f74e9e-e211-467c-a1a4-93a01ff93dd1\" (UID: \"66f74e9e-e211-467c-a1a4-93a01ff93dd1\") " Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.419045 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66f74e9e-e211-467c-a1a4-93a01ff93dd1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "66f74e9e-e211-467c-a1a4-93a01ff93dd1" (UID: "66f74e9e-e211-467c-a1a4-93a01ff93dd1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.419325 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66f74e9e-e211-467c-a1a4-93a01ff93dd1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "66f74e9e-e211-467c-a1a4-93a01ff93dd1" (UID: "66f74e9e-e211-467c-a1a4-93a01ff93dd1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.425609 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66f74e9e-e211-467c-a1a4-93a01ff93dd1-kube-api-access-d45n9" (OuterVolumeSpecName: "kube-api-access-d45n9") pod "66f74e9e-e211-467c-a1a4-93a01ff93dd1" (UID: "66f74e9e-e211-467c-a1a4-93a01ff93dd1"). InnerVolumeSpecName "kube-api-access-d45n9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.438040 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66f74e9e-e211-467c-a1a4-93a01ff93dd1-scripts" (OuterVolumeSpecName: "scripts") pod "66f74e9e-e211-467c-a1a4-93a01ff93dd1" (UID: "66f74e9e-e211-467c-a1a4-93a01ff93dd1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.517940 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66f74e9e-e211-467c-a1a4-93a01ff93dd1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "66f74e9e-e211-467c-a1a4-93a01ff93dd1" (UID: "66f74e9e-e211-467c-a1a4-93a01ff93dd1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.521183 5014 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/66f74e9e-e211-467c-a1a4-93a01ff93dd1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.521231 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d45n9\" (UniqueName: \"kubernetes.io/projected/66f74e9e-e211-467c-a1a4-93a01ff93dd1-kube-api-access-d45n9\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.521247 5014 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66f74e9e-e211-467c-a1a4-93a01ff93dd1-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.521258 5014 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/66f74e9e-e211-467c-a1a4-93a01ff93dd1-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:52 crc 
kubenswrapper[5014]: I0228 04:55:52.521269 5014 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/66f74e9e-e211-467c-a1a4-93a01ff93dd1-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.542976 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66f74e9e-e211-467c-a1a4-93a01ff93dd1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "66f74e9e-e211-467c-a1a4-93a01ff93dd1" (UID: "66f74e9e-e211-467c-a1a4-93a01ff93dd1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.550077 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66f74e9e-e211-467c-a1a4-93a01ff93dd1-config-data" (OuterVolumeSpecName: "config-data") pod "66f74e9e-e211-467c-a1a4-93a01ff93dd1" (UID: "66f74e9e-e211-467c-a1a4-93a01ff93dd1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.623913 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/66f74e9e-e211-467c-a1a4-93a01ff93dd1-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.623952 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/66f74e9e-e211-467c-a1a4-93a01ff93dd1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.872043 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"66f74e9e-e211-467c-a1a4-93a01ff93dd1","Type":"ContainerDied","Data":"8e77a3f0119574315e0938fb575bebe7653432e52b5efef6b4e9ee96366cb950"} Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.872444 5014 scope.go:117] "RemoveContainer" containerID="838d192b24556a3c9deb83806ca8561b630f516a6b4db2006248bd85156badaa" Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.872096 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.894601 5014 scope.go:117] "RemoveContainer" containerID="d5aababe7bdfb415d477854d0ce21bbc6bc6951eef00c94f73b554db95872510" Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.918407 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.923279 5014 scope.go:117] "RemoveContainer" containerID="b064ffa2970fac4c6c85bac6219a8a5822bfbb6c85df40e07e8d32d5afe5244a" Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.935927 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.960509 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:55:52 crc kubenswrapper[5014]: E0228 04:55:52.960911 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66f74e9e-e211-467c-a1a4-93a01ff93dd1" containerName="ceilometer-central-agent" Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.960925 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="66f74e9e-e211-467c-a1a4-93a01ff93dd1" containerName="ceilometer-central-agent" Feb 28 04:55:52 crc kubenswrapper[5014]: E0228 04:55:52.960944 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66f74e9e-e211-467c-a1a4-93a01ff93dd1" containerName="sg-core" Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.960950 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="66f74e9e-e211-467c-a1a4-93a01ff93dd1" containerName="sg-core" Feb 28 04:55:52 crc kubenswrapper[5014]: E0228 04:55:52.960965 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66f74e9e-e211-467c-a1a4-93a01ff93dd1" containerName="proxy-httpd" Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.960973 5014 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="66f74e9e-e211-467c-a1a4-93a01ff93dd1" containerName="proxy-httpd" Feb 28 04:55:52 crc kubenswrapper[5014]: E0228 04:55:52.960994 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66f74e9e-e211-467c-a1a4-93a01ff93dd1" containerName="ceilometer-notification-agent" Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.961001 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="66f74e9e-e211-467c-a1a4-93a01ff93dd1" containerName="ceilometer-notification-agent" Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.961163 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="66f74e9e-e211-467c-a1a4-93a01ff93dd1" containerName="ceilometer-central-agent" Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.961176 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="66f74e9e-e211-467c-a1a4-93a01ff93dd1" containerName="sg-core" Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.961192 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="66f74e9e-e211-467c-a1a4-93a01ff93dd1" containerName="proxy-httpd" Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.961203 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="66f74e9e-e211-467c-a1a4-93a01ff93dd1" containerName="ceilometer-notification-agent" Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.962744 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.966577 5014 scope.go:117] "RemoveContainer" containerID="df2b1fed0b546adf18bd5346ef003f7add4379fdd3ce9c4a4e1102d6504e8cbb" Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.966904 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.967105 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.967213 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 28 04:55:52 crc kubenswrapper[5014]: I0228 04:55:52.975610 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:55:53 crc kubenswrapper[5014]: I0228 04:55:53.133479 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7bfdbf3-8c37-4c43-b266-12b88843c085-log-httpd\") pod \"ceilometer-0\" (UID: \"a7bfdbf3-8c37-4c43-b266-12b88843c085\") " pod="openstack/ceilometer-0" Feb 28 04:55:53 crc kubenswrapper[5014]: I0228 04:55:53.133597 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7bfdbf3-8c37-4c43-b266-12b88843c085-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a7bfdbf3-8c37-4c43-b266-12b88843c085\") " pod="openstack/ceilometer-0" Feb 28 04:55:53 crc kubenswrapper[5014]: I0228 04:55:53.133639 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7bfdbf3-8c37-4c43-b266-12b88843c085-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a7bfdbf3-8c37-4c43-b266-12b88843c085\") " 
pod="openstack/ceilometer-0" Feb 28 04:55:53 crc kubenswrapper[5014]: I0228 04:55:53.133727 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sk2m\" (UniqueName: \"kubernetes.io/projected/a7bfdbf3-8c37-4c43-b266-12b88843c085-kube-api-access-8sk2m\") pod \"ceilometer-0\" (UID: \"a7bfdbf3-8c37-4c43-b266-12b88843c085\") " pod="openstack/ceilometer-0" Feb 28 04:55:53 crc kubenswrapper[5014]: I0228 04:55:53.133970 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a7bfdbf3-8c37-4c43-b266-12b88843c085-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a7bfdbf3-8c37-4c43-b266-12b88843c085\") " pod="openstack/ceilometer-0" Feb 28 04:55:53 crc kubenswrapper[5014]: I0228 04:55:53.134032 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7bfdbf3-8c37-4c43-b266-12b88843c085-scripts\") pod \"ceilometer-0\" (UID: \"a7bfdbf3-8c37-4c43-b266-12b88843c085\") " pod="openstack/ceilometer-0" Feb 28 04:55:53 crc kubenswrapper[5014]: I0228 04:55:53.134085 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7bfdbf3-8c37-4c43-b266-12b88843c085-run-httpd\") pod \"ceilometer-0\" (UID: \"a7bfdbf3-8c37-4c43-b266-12b88843c085\") " pod="openstack/ceilometer-0" Feb 28 04:55:53 crc kubenswrapper[5014]: I0228 04:55:53.134109 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7bfdbf3-8c37-4c43-b266-12b88843c085-config-data\") pod \"ceilometer-0\" (UID: \"a7bfdbf3-8c37-4c43-b266-12b88843c085\") " pod="openstack/ceilometer-0" Feb 28 04:55:53 crc kubenswrapper[5014]: I0228 04:55:53.235403 5014 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7bfdbf3-8c37-4c43-b266-12b88843c085-log-httpd\") pod \"ceilometer-0\" (UID: \"a7bfdbf3-8c37-4c43-b266-12b88843c085\") " pod="openstack/ceilometer-0" Feb 28 04:55:53 crc kubenswrapper[5014]: I0228 04:55:53.235472 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7bfdbf3-8c37-4c43-b266-12b88843c085-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a7bfdbf3-8c37-4c43-b266-12b88843c085\") " pod="openstack/ceilometer-0" Feb 28 04:55:53 crc kubenswrapper[5014]: I0228 04:55:53.235493 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7bfdbf3-8c37-4c43-b266-12b88843c085-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a7bfdbf3-8c37-4c43-b266-12b88843c085\") " pod="openstack/ceilometer-0" Feb 28 04:55:53 crc kubenswrapper[5014]: I0228 04:55:53.235549 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sk2m\" (UniqueName: \"kubernetes.io/projected/a7bfdbf3-8c37-4c43-b266-12b88843c085-kube-api-access-8sk2m\") pod \"ceilometer-0\" (UID: \"a7bfdbf3-8c37-4c43-b266-12b88843c085\") " pod="openstack/ceilometer-0" Feb 28 04:55:53 crc kubenswrapper[5014]: I0228 04:55:53.235573 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a7bfdbf3-8c37-4c43-b266-12b88843c085-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a7bfdbf3-8c37-4c43-b266-12b88843c085\") " pod="openstack/ceilometer-0" Feb 28 04:55:53 crc kubenswrapper[5014]: I0228 04:55:53.235593 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7bfdbf3-8c37-4c43-b266-12b88843c085-scripts\") pod \"ceilometer-0\" (UID: 
\"a7bfdbf3-8c37-4c43-b266-12b88843c085\") " pod="openstack/ceilometer-0" Feb 28 04:55:53 crc kubenswrapper[5014]: I0228 04:55:53.235617 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7bfdbf3-8c37-4c43-b266-12b88843c085-run-httpd\") pod \"ceilometer-0\" (UID: \"a7bfdbf3-8c37-4c43-b266-12b88843c085\") " pod="openstack/ceilometer-0" Feb 28 04:55:53 crc kubenswrapper[5014]: I0228 04:55:53.235631 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7bfdbf3-8c37-4c43-b266-12b88843c085-config-data\") pod \"ceilometer-0\" (UID: \"a7bfdbf3-8c37-4c43-b266-12b88843c085\") " pod="openstack/ceilometer-0" Feb 28 04:55:53 crc kubenswrapper[5014]: I0228 04:55:53.237188 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7bfdbf3-8c37-4c43-b266-12b88843c085-log-httpd\") pod \"ceilometer-0\" (UID: \"a7bfdbf3-8c37-4c43-b266-12b88843c085\") " pod="openstack/ceilometer-0" Feb 28 04:55:53 crc kubenswrapper[5014]: I0228 04:55:53.237197 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7bfdbf3-8c37-4c43-b266-12b88843c085-run-httpd\") pod \"ceilometer-0\" (UID: \"a7bfdbf3-8c37-4c43-b266-12b88843c085\") " pod="openstack/ceilometer-0" Feb 28 04:55:53 crc kubenswrapper[5014]: I0228 04:55:53.240410 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7bfdbf3-8c37-4c43-b266-12b88843c085-scripts\") pod \"ceilometer-0\" (UID: \"a7bfdbf3-8c37-4c43-b266-12b88843c085\") " pod="openstack/ceilometer-0" Feb 28 04:55:53 crc kubenswrapper[5014]: I0228 04:55:53.242533 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/a7bfdbf3-8c37-4c43-b266-12b88843c085-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a7bfdbf3-8c37-4c43-b266-12b88843c085\") " pod="openstack/ceilometer-0" Feb 28 04:55:53 crc kubenswrapper[5014]: I0228 04:55:53.242719 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7bfdbf3-8c37-4c43-b266-12b88843c085-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a7bfdbf3-8c37-4c43-b266-12b88843c085\") " pod="openstack/ceilometer-0" Feb 28 04:55:53 crc kubenswrapper[5014]: I0228 04:55:53.242756 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7bfdbf3-8c37-4c43-b266-12b88843c085-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a7bfdbf3-8c37-4c43-b266-12b88843c085\") " pod="openstack/ceilometer-0" Feb 28 04:55:53 crc kubenswrapper[5014]: I0228 04:55:53.247947 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7bfdbf3-8c37-4c43-b266-12b88843c085-config-data\") pod \"ceilometer-0\" (UID: \"a7bfdbf3-8c37-4c43-b266-12b88843c085\") " pod="openstack/ceilometer-0" Feb 28 04:55:53 crc kubenswrapper[5014]: I0228 04:55:53.255049 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sk2m\" (UniqueName: \"kubernetes.io/projected/a7bfdbf3-8c37-4c43-b266-12b88843c085-kube-api-access-8sk2m\") pod \"ceilometer-0\" (UID: \"a7bfdbf3-8c37-4c43-b266-12b88843c085\") " pod="openstack/ceilometer-0" Feb 28 04:55:53 crc kubenswrapper[5014]: I0228 04:55:53.299215 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 04:55:53 crc kubenswrapper[5014]: I0228 04:55:53.782525 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:55:53 crc kubenswrapper[5014]: I0228 04:55:53.879549 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7bfdbf3-8c37-4c43-b266-12b88843c085","Type":"ContainerStarted","Data":"d730cecd7d5c06937b6e71129fd2aa307370d084924f289fc668b8ea884a6017"} Feb 28 04:55:54 crc kubenswrapper[5014]: I0228 04:55:54.182911 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66f74e9e-e211-467c-a1a4-93a01ff93dd1" path="/var/lib/kubelet/pods/66f74e9e-e211-467c-a1a4-93a01ff93dd1/volumes" Feb 28 04:55:54 crc kubenswrapper[5014]: I0228 04:55:54.898262 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7bfdbf3-8c37-4c43-b266-12b88843c085","Type":"ContainerStarted","Data":"23de5b50b110e5218b34964b174fecfb86edd0372b410dd3a6c653980146e08c"} Feb 28 04:55:55 crc kubenswrapper[5014]: I0228 04:55:55.035151 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 28 04:55:55 crc kubenswrapper[5014]: I0228 04:55:55.037546 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 28 04:55:55 crc kubenswrapper[5014]: I0228 04:55:55.042298 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 28 04:55:55 crc kubenswrapper[5014]: I0228 04:55:55.909913 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7bfdbf3-8c37-4c43-b266-12b88843c085","Type":"ContainerStarted","Data":"59d96545945588c50d45f08d22daac6cae57921e63c4c77238d5c44759fd0b51"} Feb 28 04:55:55 crc kubenswrapper[5014]: I0228 04:55:55.910214 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"a7bfdbf3-8c37-4c43-b266-12b88843c085","Type":"ContainerStarted","Data":"03ea790764897836fbdb9c9eb74b2b8951839c5345b065fcb8c6fdd614077fc9"} Feb 28 04:55:55 crc kubenswrapper[5014]: I0228 04:55:55.915081 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 28 04:55:56 crc kubenswrapper[5014]: I0228 04:55:56.865982 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 28 04:55:56 crc kubenswrapper[5014]: I0228 04:55:56.919747 5014 generic.go:334] "Generic (PLEG): container finished" podID="4c2600df-f028-4e93-82c5-c25cb1112ffb" containerID="be15dc1ba18bd66140d3775f378871bb8f6ffd240268605f8de10a63eedfc9d2" exitCode=137 Feb 28 04:55:56 crc kubenswrapper[5014]: I0228 04:55:56.919832 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 28 04:55:56 crc kubenswrapper[5014]: I0228 04:55:56.919841 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4c2600df-f028-4e93-82c5-c25cb1112ffb","Type":"ContainerDied","Data":"be15dc1ba18bd66140d3775f378871bb8f6ffd240268605f8de10a63eedfc9d2"} Feb 28 04:55:56 crc kubenswrapper[5014]: I0228 04:55:56.920555 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4c2600df-f028-4e93-82c5-c25cb1112ffb","Type":"ContainerDied","Data":"1e1cdc95d010bbf49321e43e8a1a8c04d12443a04f6e25e51f5c33c54f332466"} Feb 28 04:55:56 crc kubenswrapper[5014]: I0228 04:55:56.920577 5014 scope.go:117] "RemoveContainer" containerID="be15dc1ba18bd66140d3775f378871bb8f6ffd240268605f8de10a63eedfc9d2" Feb 28 04:55:56 crc kubenswrapper[5014]: I0228 04:55:56.944820 5014 scope.go:117] "RemoveContainer" containerID="be15dc1ba18bd66140d3775f378871bb8f6ffd240268605f8de10a63eedfc9d2" Feb 28 04:55:56 crc kubenswrapper[5014]: E0228 
04:55:56.945253 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be15dc1ba18bd66140d3775f378871bb8f6ffd240268605f8de10a63eedfc9d2\": container with ID starting with be15dc1ba18bd66140d3775f378871bb8f6ffd240268605f8de10a63eedfc9d2 not found: ID does not exist" containerID="be15dc1ba18bd66140d3775f378871bb8f6ffd240268605f8de10a63eedfc9d2" Feb 28 04:55:56 crc kubenswrapper[5014]: I0228 04:55:56.945286 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be15dc1ba18bd66140d3775f378871bb8f6ffd240268605f8de10a63eedfc9d2"} err="failed to get container status \"be15dc1ba18bd66140d3775f378871bb8f6ffd240268605f8de10a63eedfc9d2\": rpc error: code = NotFound desc = could not find container \"be15dc1ba18bd66140d3775f378871bb8f6ffd240268605f8de10a63eedfc9d2\": container with ID starting with be15dc1ba18bd66140d3775f378871bb8f6ffd240268605f8de10a63eedfc9d2 not found: ID does not exist" Feb 28 04:55:57 crc kubenswrapper[5014]: I0228 04:55:57.007390 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhqx8\" (UniqueName: \"kubernetes.io/projected/4c2600df-f028-4e93-82c5-c25cb1112ffb-kube-api-access-vhqx8\") pod \"4c2600df-f028-4e93-82c5-c25cb1112ffb\" (UID: \"4c2600df-f028-4e93-82c5-c25cb1112ffb\") " Feb 28 04:55:57 crc kubenswrapper[5014]: I0228 04:55:57.007745 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c2600df-f028-4e93-82c5-c25cb1112ffb-config-data\") pod \"4c2600df-f028-4e93-82c5-c25cb1112ffb\" (UID: \"4c2600df-f028-4e93-82c5-c25cb1112ffb\") " Feb 28 04:55:57 crc kubenswrapper[5014]: I0228 04:55:57.007919 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c2600df-f028-4e93-82c5-c25cb1112ffb-combined-ca-bundle\") pod 
\"4c2600df-f028-4e93-82c5-c25cb1112ffb\" (UID: \"4c2600df-f028-4e93-82c5-c25cb1112ffb\") " Feb 28 04:55:57 crc kubenswrapper[5014]: I0228 04:55:57.021651 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c2600df-f028-4e93-82c5-c25cb1112ffb-kube-api-access-vhqx8" (OuterVolumeSpecName: "kube-api-access-vhqx8") pod "4c2600df-f028-4e93-82c5-c25cb1112ffb" (UID: "4c2600df-f028-4e93-82c5-c25cb1112ffb"). InnerVolumeSpecName "kube-api-access-vhqx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:55:57 crc kubenswrapper[5014]: I0228 04:55:57.045920 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c2600df-f028-4e93-82c5-c25cb1112ffb-config-data" (OuterVolumeSpecName: "config-data") pod "4c2600df-f028-4e93-82c5-c25cb1112ffb" (UID: "4c2600df-f028-4e93-82c5-c25cb1112ffb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:55:57 crc kubenswrapper[5014]: I0228 04:55:57.057887 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c2600df-f028-4e93-82c5-c25cb1112ffb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c2600df-f028-4e93-82c5-c25cb1112ffb" (UID: "4c2600df-f028-4e93-82c5-c25cb1112ffb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:55:57 crc kubenswrapper[5014]: I0228 04:55:57.110298 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhqx8\" (UniqueName: \"kubernetes.io/projected/4c2600df-f028-4e93-82c5-c25cb1112ffb-kube-api-access-vhqx8\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:57 crc kubenswrapper[5014]: I0228 04:55:57.110612 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c2600df-f028-4e93-82c5-c25cb1112ffb-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:57 crc kubenswrapper[5014]: I0228 04:55:57.110627 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c2600df-f028-4e93-82c5-c25cb1112ffb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:55:57 crc kubenswrapper[5014]: I0228 04:55:57.258786 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 28 04:55:57 crc kubenswrapper[5014]: I0228 04:55:57.270786 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 28 04:55:57 crc kubenswrapper[5014]: I0228 04:55:57.281517 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 28 04:55:57 crc kubenswrapper[5014]: E0228 04:55:57.282001 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c2600df-f028-4e93-82c5-c25cb1112ffb" containerName="nova-cell1-novncproxy-novncproxy" Feb 28 04:55:57 crc kubenswrapper[5014]: I0228 04:55:57.282024 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c2600df-f028-4e93-82c5-c25cb1112ffb" containerName="nova-cell1-novncproxy-novncproxy" Feb 28 04:55:57 crc kubenswrapper[5014]: I0228 04:55:57.282287 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c2600df-f028-4e93-82c5-c25cb1112ffb" containerName="nova-cell1-novncproxy-novncproxy" Feb 28 
04:55:57 crc kubenswrapper[5014]: I0228 04:55:57.283056 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 28 04:55:57 crc kubenswrapper[5014]: I0228 04:55:57.288412 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 28 04:55:57 crc kubenswrapper[5014]: I0228 04:55:57.288491 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 28 04:55:57 crc kubenswrapper[5014]: I0228 04:55:57.289329 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 28 04:55:57 crc kubenswrapper[5014]: I0228 04:55:57.292442 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 28 04:55:57 crc kubenswrapper[5014]: I0228 04:55:57.415765 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/974c3323-4513-41b7-9c2e-7cb58d91d6f1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"974c3323-4513-41b7-9c2e-7cb58d91d6f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 04:55:57 crc kubenswrapper[5014]: I0228 04:55:57.415882 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr8r7\" (UniqueName: \"kubernetes.io/projected/974c3323-4513-41b7-9c2e-7cb58d91d6f1-kube-api-access-vr8r7\") pod \"nova-cell1-novncproxy-0\" (UID: \"974c3323-4513-41b7-9c2e-7cb58d91d6f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 04:55:57 crc kubenswrapper[5014]: I0228 04:55:57.415943 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/974c3323-4513-41b7-9c2e-7cb58d91d6f1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" 
(UID: \"974c3323-4513-41b7-9c2e-7cb58d91d6f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 04:55:57 crc kubenswrapper[5014]: I0228 04:55:57.415963 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/974c3323-4513-41b7-9c2e-7cb58d91d6f1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"974c3323-4513-41b7-9c2e-7cb58d91d6f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 04:55:57 crc kubenswrapper[5014]: I0228 04:55:57.416029 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/974c3323-4513-41b7-9c2e-7cb58d91d6f1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"974c3323-4513-41b7-9c2e-7cb58d91d6f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 04:55:57 crc kubenswrapper[5014]: I0228 04:55:57.517597 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/974c3323-4513-41b7-9c2e-7cb58d91d6f1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"974c3323-4513-41b7-9c2e-7cb58d91d6f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 04:55:57 crc kubenswrapper[5014]: I0228 04:55:57.517883 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr8r7\" (UniqueName: \"kubernetes.io/projected/974c3323-4513-41b7-9c2e-7cb58d91d6f1-kube-api-access-vr8r7\") pod \"nova-cell1-novncproxy-0\" (UID: \"974c3323-4513-41b7-9c2e-7cb58d91d6f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 04:55:57 crc kubenswrapper[5014]: I0228 04:55:57.518037 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/974c3323-4513-41b7-9c2e-7cb58d91d6f1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"974c3323-4513-41b7-9c2e-7cb58d91d6f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 04:55:57 crc kubenswrapper[5014]: I0228 04:55:57.518121 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/974c3323-4513-41b7-9c2e-7cb58d91d6f1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"974c3323-4513-41b7-9c2e-7cb58d91d6f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 04:55:57 crc kubenswrapper[5014]: I0228 04:55:57.518243 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/974c3323-4513-41b7-9c2e-7cb58d91d6f1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"974c3323-4513-41b7-9c2e-7cb58d91d6f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 04:55:57 crc kubenswrapper[5014]: I0228 04:55:57.522050 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/974c3323-4513-41b7-9c2e-7cb58d91d6f1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"974c3323-4513-41b7-9c2e-7cb58d91d6f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 04:55:57 crc kubenswrapper[5014]: I0228 04:55:57.522068 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/974c3323-4513-41b7-9c2e-7cb58d91d6f1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"974c3323-4513-41b7-9c2e-7cb58d91d6f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 04:55:57 crc kubenswrapper[5014]: I0228 04:55:57.532428 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/974c3323-4513-41b7-9c2e-7cb58d91d6f1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"974c3323-4513-41b7-9c2e-7cb58d91d6f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 04:55:57 crc 
kubenswrapper[5014]: I0228 04:55:57.532428 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/974c3323-4513-41b7-9c2e-7cb58d91d6f1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"974c3323-4513-41b7-9c2e-7cb58d91d6f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 04:55:57 crc kubenswrapper[5014]: I0228 04:55:57.545277 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr8r7\" (UniqueName: \"kubernetes.io/projected/974c3323-4513-41b7-9c2e-7cb58d91d6f1-kube-api-access-vr8r7\") pod \"nova-cell1-novncproxy-0\" (UID: \"974c3323-4513-41b7-9c2e-7cb58d91d6f1\") " pod="openstack/nova-cell1-novncproxy-0" Feb 28 04:55:57 crc kubenswrapper[5014]: I0228 04:55:57.602992 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 28 04:55:57 crc kubenswrapper[5014]: I0228 04:55:57.954973 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7bfdbf3-8c37-4c43-b266-12b88843c085","Type":"ContainerStarted","Data":"10db8c22833f304a2f59bc8ff84c619fe3713e740a5df5ccd2fff96e8c45de76"} Feb 28 04:55:57 crc kubenswrapper[5014]: I0228 04:55:57.955284 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 28 04:55:57 crc kubenswrapper[5014]: I0228 04:55:57.983026 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.069527731 podStartE2EDuration="5.98300692s" podCreationTimestamp="2026-02-28 04:55:52 +0000 UTC" firstStartedPulling="2026-02-28 04:55:53.797750287 +0000 UTC m=+1342.467876227" lastFinishedPulling="2026-02-28 04:55:57.711229506 +0000 UTC m=+1346.381355416" observedRunningTime="2026-02-28 04:55:57.981195662 +0000 UTC m=+1346.651321582" watchObservedRunningTime="2026-02-28 04:55:57.98300692 +0000 UTC 
m=+1346.653132840" Feb 28 04:55:58 crc kubenswrapper[5014]: I0228 04:55:58.085217 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 28 04:55:58 crc kubenswrapper[5014]: W0228 04:55:58.094512 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod974c3323_4513_41b7_9c2e_7cb58d91d6f1.slice/crio-83d2ec362bee15584841484f962cf5125cf2c56d18b26fea52df0e317041efc5 WatchSource:0}: Error finding container 83d2ec362bee15584841484f962cf5125cf2c56d18b26fea52df0e317041efc5: Status 404 returned error can't find the container with id 83d2ec362bee15584841484f962cf5125cf2c56d18b26fea52df0e317041efc5 Feb 28 04:55:58 crc kubenswrapper[5014]: I0228 04:55:58.186320 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c2600df-f028-4e93-82c5-c25cb1112ffb" path="/var/lib/kubelet/pods/4c2600df-f028-4e93-82c5-c25cb1112ffb/volumes" Feb 28 04:55:58 crc kubenswrapper[5014]: I0228 04:55:58.974958 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"974c3323-4513-41b7-9c2e-7cb58d91d6f1","Type":"ContainerStarted","Data":"031dff66acdd577bca00a908bfe5ec6fb44e2129d23738f143f6659d2924088d"} Feb 28 04:55:58 crc kubenswrapper[5014]: I0228 04:55:58.975230 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"974c3323-4513-41b7-9c2e-7cb58d91d6f1","Type":"ContainerStarted","Data":"83d2ec362bee15584841484f962cf5125cf2c56d18b26fea52df0e317041efc5"} Feb 28 04:55:59 crc kubenswrapper[5014]: I0228 04:55:59.004490 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.004468751 podStartE2EDuration="2.004468751s" podCreationTimestamp="2026-02-28 04:55:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-28 04:55:58.996186679 +0000 UTC m=+1347.666312589" watchObservedRunningTime="2026-02-28 04:55:59.004468751 +0000 UTC m=+1347.674594661" Feb 28 04:55:59 crc kubenswrapper[5014]: I0228 04:55:59.209185 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 28 04:56:00 crc kubenswrapper[5014]: I0228 04:56:00.088395 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 28 04:56:00 crc kubenswrapper[5014]: I0228 04:56:00.088853 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 28 04:56:00 crc kubenswrapper[5014]: I0228 04:56:00.088894 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 28 04:56:00 crc kubenswrapper[5014]: I0228 04:56:00.092067 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 28 04:56:00 crc kubenswrapper[5014]: I0228 04:56:00.161951 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537576-mx47j"] Feb 28 04:56:00 crc kubenswrapper[5014]: I0228 04:56:00.163599 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537576-mx47j" Feb 28 04:56:00 crc kubenswrapper[5014]: I0228 04:56:00.165649 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-r6pdk" Feb 28 04:56:00 crc kubenswrapper[5014]: I0228 04:56:00.165766 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 04:56:00 crc kubenswrapper[5014]: I0228 04:56:00.177686 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 04:56:00 crc kubenswrapper[5014]: I0228 04:56:00.183932 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537576-mx47j"] Feb 28 04:56:00 crc kubenswrapper[5014]: I0228 04:56:00.274138 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv5g6\" (UniqueName: \"kubernetes.io/projected/a3702cbc-ce6b-4f93-9015-bd7cdc462025-kube-api-access-lv5g6\") pod \"auto-csr-approver-29537576-mx47j\" (UID: \"a3702cbc-ce6b-4f93-9015-bd7cdc462025\") " pod="openshift-infra/auto-csr-approver-29537576-mx47j" Feb 28 04:56:00 crc kubenswrapper[5014]: I0228 04:56:00.375703 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lv5g6\" (UniqueName: \"kubernetes.io/projected/a3702cbc-ce6b-4f93-9015-bd7cdc462025-kube-api-access-lv5g6\") pod \"auto-csr-approver-29537576-mx47j\" (UID: \"a3702cbc-ce6b-4f93-9015-bd7cdc462025\") " pod="openshift-infra/auto-csr-approver-29537576-mx47j" Feb 28 04:56:00 crc kubenswrapper[5014]: I0228 04:56:00.430138 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lv5g6\" (UniqueName: \"kubernetes.io/projected/a3702cbc-ce6b-4f93-9015-bd7cdc462025-kube-api-access-lv5g6\") pod \"auto-csr-approver-29537576-mx47j\" (UID: \"a3702cbc-ce6b-4f93-9015-bd7cdc462025\") " 
pod="openshift-infra/auto-csr-approver-29537576-mx47j" Feb 28 04:56:00 crc kubenswrapper[5014]: I0228 04:56:00.504658 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537576-mx47j" Feb 28 04:56:00 crc kubenswrapper[5014]: I0228 04:56:00.995795 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 28 04:56:01 crc kubenswrapper[5014]: I0228 04:56:01.000169 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 28 04:56:01 crc kubenswrapper[5014]: W0228 04:56:01.025474 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda3702cbc_ce6b_4f93_9015_bd7cdc462025.slice/crio-6a175dc0fc1594d15b541673d17a2dc305822ba9035226512ff060b46943e3c1 WatchSource:0}: Error finding container 6a175dc0fc1594d15b541673d17a2dc305822ba9035226512ff060b46943e3c1: Status 404 returned error can't find the container with id 6a175dc0fc1594d15b541673d17a2dc305822ba9035226512ff060b46943e3c1 Feb 28 04:56:01 crc kubenswrapper[5014]: I0228 04:56:01.037004 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537576-mx47j"] Feb 28 04:56:01 crc kubenswrapper[5014]: I0228 04:56:01.183446 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-4b7rx"] Feb 28 04:56:01 crc kubenswrapper[5014]: I0228 04:56:01.188075 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-4b7rx" Feb 28 04:56:01 crc kubenswrapper[5014]: I0228 04:56:01.201369 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-4b7rx"] Feb 28 04:56:01 crc kubenswrapper[5014]: I0228 04:56:01.300154 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c076180-9ead-4605-84e7-d0d920d19cdb-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-4b7rx\" (UID: \"0c076180-9ead-4605-84e7-d0d920d19cdb\") " pod="openstack/dnsmasq-dns-89c5cd4d5-4b7rx" Feb 28 04:56:01 crc kubenswrapper[5014]: I0228 04:56:01.300447 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c076180-9ead-4605-84e7-d0d920d19cdb-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-4b7rx\" (UID: \"0c076180-9ead-4605-84e7-d0d920d19cdb\") " pod="openstack/dnsmasq-dns-89c5cd4d5-4b7rx" Feb 28 04:56:01 crc kubenswrapper[5014]: I0228 04:56:01.300468 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9729\" (UniqueName: \"kubernetes.io/projected/0c076180-9ead-4605-84e7-d0d920d19cdb-kube-api-access-z9729\") pod \"dnsmasq-dns-89c5cd4d5-4b7rx\" (UID: \"0c076180-9ead-4605-84e7-d0d920d19cdb\") " pod="openstack/dnsmasq-dns-89c5cd4d5-4b7rx" Feb 28 04:56:01 crc kubenswrapper[5014]: I0228 04:56:01.300584 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c076180-9ead-4605-84e7-d0d920d19cdb-config\") pod \"dnsmasq-dns-89c5cd4d5-4b7rx\" (UID: \"0c076180-9ead-4605-84e7-d0d920d19cdb\") " pod="openstack/dnsmasq-dns-89c5cd4d5-4b7rx" Feb 28 04:56:01 crc kubenswrapper[5014]: I0228 04:56:01.300616 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c076180-9ead-4605-84e7-d0d920d19cdb-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-4b7rx\" (UID: \"0c076180-9ead-4605-84e7-d0d920d19cdb\") " pod="openstack/dnsmasq-dns-89c5cd4d5-4b7rx" Feb 28 04:56:01 crc kubenswrapper[5014]: I0228 04:56:01.300703 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c076180-9ead-4605-84e7-d0d920d19cdb-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-4b7rx\" (UID: \"0c076180-9ead-4605-84e7-d0d920d19cdb\") " pod="openstack/dnsmasq-dns-89c5cd4d5-4b7rx" Feb 28 04:56:01 crc kubenswrapper[5014]: I0228 04:56:01.402721 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c076180-9ead-4605-84e7-d0d920d19cdb-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-4b7rx\" (UID: \"0c076180-9ead-4605-84e7-d0d920d19cdb\") " pod="openstack/dnsmasq-dns-89c5cd4d5-4b7rx" Feb 28 04:56:01 crc kubenswrapper[5014]: I0228 04:56:01.402777 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c076180-9ead-4605-84e7-d0d920d19cdb-dns-svc\") pod \"dnsmasq-dns-89c5cd4d5-4b7rx\" (UID: \"0c076180-9ead-4605-84e7-d0d920d19cdb\") " pod="openstack/dnsmasq-dns-89c5cd4d5-4b7rx" Feb 28 04:56:01 crc kubenswrapper[5014]: I0228 04:56:01.402818 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9729\" (UniqueName: \"kubernetes.io/projected/0c076180-9ead-4605-84e7-d0d920d19cdb-kube-api-access-z9729\") pod \"dnsmasq-dns-89c5cd4d5-4b7rx\" (UID: \"0c076180-9ead-4605-84e7-d0d920d19cdb\") " pod="openstack/dnsmasq-dns-89c5cd4d5-4b7rx" Feb 28 04:56:01 crc kubenswrapper[5014]: I0228 04:56:01.402893 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0c076180-9ead-4605-84e7-d0d920d19cdb-config\") pod \"dnsmasq-dns-89c5cd4d5-4b7rx\" (UID: \"0c076180-9ead-4605-84e7-d0d920d19cdb\") " pod="openstack/dnsmasq-dns-89c5cd4d5-4b7rx" Feb 28 04:56:01 crc kubenswrapper[5014]: I0228 04:56:01.402917 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c076180-9ead-4605-84e7-d0d920d19cdb-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-4b7rx\" (UID: \"0c076180-9ead-4605-84e7-d0d920d19cdb\") " pod="openstack/dnsmasq-dns-89c5cd4d5-4b7rx" Feb 28 04:56:01 crc kubenswrapper[5014]: I0228 04:56:01.402957 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c076180-9ead-4605-84e7-d0d920d19cdb-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-4b7rx\" (UID: \"0c076180-9ead-4605-84e7-d0d920d19cdb\") " pod="openstack/dnsmasq-dns-89c5cd4d5-4b7rx" Feb 28 04:56:01 crc kubenswrapper[5014]: I0228 04:56:01.403916 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c076180-9ead-4605-84e7-d0d920d19cdb-ovsdbserver-nb\") pod \"dnsmasq-dns-89c5cd4d5-4b7rx\" (UID: \"0c076180-9ead-4605-84e7-d0d920d19cdb\") " pod="openstack/dnsmasq-dns-89c5cd4d5-4b7rx" Feb 28 04:56:01 crc kubenswrapper[5014]: I0228 04:56:01.403958 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c076180-9ead-4605-84e7-d0d920d19cdb-dns-swift-storage-0\") pod \"dnsmasq-dns-89c5cd4d5-4b7rx\" (UID: \"0c076180-9ead-4605-84e7-d0d920d19cdb\") " pod="openstack/dnsmasq-dns-89c5cd4d5-4b7rx" Feb 28 04:56:01 crc kubenswrapper[5014]: I0228 04:56:01.404241 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c076180-9ead-4605-84e7-d0d920d19cdb-dns-svc\") pod 
\"dnsmasq-dns-89c5cd4d5-4b7rx\" (UID: \"0c076180-9ead-4605-84e7-d0d920d19cdb\") " pod="openstack/dnsmasq-dns-89c5cd4d5-4b7rx" Feb 28 04:56:01 crc kubenswrapper[5014]: I0228 04:56:01.404273 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c076180-9ead-4605-84e7-d0d920d19cdb-config\") pod \"dnsmasq-dns-89c5cd4d5-4b7rx\" (UID: \"0c076180-9ead-4605-84e7-d0d920d19cdb\") " pod="openstack/dnsmasq-dns-89c5cd4d5-4b7rx" Feb 28 04:56:01 crc kubenswrapper[5014]: I0228 04:56:01.404696 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c076180-9ead-4605-84e7-d0d920d19cdb-ovsdbserver-sb\") pod \"dnsmasq-dns-89c5cd4d5-4b7rx\" (UID: \"0c076180-9ead-4605-84e7-d0d920d19cdb\") " pod="openstack/dnsmasq-dns-89c5cd4d5-4b7rx" Feb 28 04:56:01 crc kubenswrapper[5014]: I0228 04:56:01.432785 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9729\" (UniqueName: \"kubernetes.io/projected/0c076180-9ead-4605-84e7-d0d920d19cdb-kube-api-access-z9729\") pod \"dnsmasq-dns-89c5cd4d5-4b7rx\" (UID: \"0c076180-9ead-4605-84e7-d0d920d19cdb\") " pod="openstack/dnsmasq-dns-89c5cd4d5-4b7rx" Feb 28 04:56:01 crc kubenswrapper[5014]: I0228 04:56:01.526082 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-4b7rx" Feb 28 04:56:02 crc kubenswrapper[5014]: I0228 04:56:02.012417 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537576-mx47j" event={"ID":"a3702cbc-ce6b-4f93-9015-bd7cdc462025","Type":"ContainerStarted","Data":"6a175dc0fc1594d15b541673d17a2dc305822ba9035226512ff060b46943e3c1"} Feb 28 04:56:02 crc kubenswrapper[5014]: I0228 04:56:02.027641 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-4b7rx"] Feb 28 04:56:02 crc kubenswrapper[5014]: W0228 04:56:02.029333 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c076180_9ead_4605_84e7_d0d920d19cdb.slice/crio-090b73ae7121ff0e2e0e8babf5acf5c269491a77ba512cf383e5c628b9d76304 WatchSource:0}: Error finding container 090b73ae7121ff0e2e0e8babf5acf5c269491a77ba512cf383e5c628b9d76304: Status 404 returned error can't find the container with id 090b73ae7121ff0e2e0e8babf5acf5c269491a77ba512cf383e5c628b9d76304 Feb 28 04:56:02 crc kubenswrapper[5014]: I0228 04:56:02.603306 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 28 04:56:03 crc kubenswrapper[5014]: I0228 04:56:03.029595 5014 generic.go:334] "Generic (PLEG): container finished" podID="a3702cbc-ce6b-4f93-9015-bd7cdc462025" containerID="8e045b92cef9362d67b5d4ed98632aa9f63c689047ceff522638c5235d5ee134" exitCode=0 Feb 28 04:56:03 crc kubenswrapper[5014]: I0228 04:56:03.029698 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537576-mx47j" event={"ID":"a3702cbc-ce6b-4f93-9015-bd7cdc462025","Type":"ContainerDied","Data":"8e045b92cef9362d67b5d4ed98632aa9f63c689047ceff522638c5235d5ee134"} Feb 28 04:56:03 crc kubenswrapper[5014]: I0228 04:56:03.032074 5014 generic.go:334] "Generic (PLEG): container finished" 
podID="0c076180-9ead-4605-84e7-d0d920d19cdb" containerID="9cfe7cb9677c062ac380134d631692243311c8d628a166fce7c28671f1abec22" exitCode=0 Feb 28 04:56:03 crc kubenswrapper[5014]: I0228 04:56:03.032157 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-4b7rx" event={"ID":"0c076180-9ead-4605-84e7-d0d920d19cdb","Type":"ContainerDied","Data":"9cfe7cb9677c062ac380134d631692243311c8d628a166fce7c28671f1abec22"} Feb 28 04:56:03 crc kubenswrapper[5014]: I0228 04:56:03.032188 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-4b7rx" event={"ID":"0c076180-9ead-4605-84e7-d0d920d19cdb","Type":"ContainerStarted","Data":"090b73ae7121ff0e2e0e8babf5acf5c269491a77ba512cf383e5c628b9d76304"} Feb 28 04:56:03 crc kubenswrapper[5014]: I0228 04:56:03.509795 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 28 04:56:03 crc kubenswrapper[5014]: I0228 04:56:03.530263 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:56:03 crc kubenswrapper[5014]: I0228 04:56:03.530610 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a7bfdbf3-8c37-4c43-b266-12b88843c085" containerName="ceilometer-central-agent" containerID="cri-o://23de5b50b110e5218b34964b174fecfb86edd0372b410dd3a6c653980146e08c" gracePeriod=30 Feb 28 04:56:03 crc kubenswrapper[5014]: I0228 04:56:03.530647 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a7bfdbf3-8c37-4c43-b266-12b88843c085" containerName="proxy-httpd" containerID="cri-o://10db8c22833f304a2f59bc8ff84c619fe3713e740a5df5ccd2fff96e8c45de76" gracePeriod=30 Feb 28 04:56:03 crc kubenswrapper[5014]: I0228 04:56:03.530676 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a7bfdbf3-8c37-4c43-b266-12b88843c085" 
containerName="ceilometer-notification-agent" containerID="cri-o://03ea790764897836fbdb9c9eb74b2b8951839c5345b065fcb8c6fdd614077fc9" gracePeriod=30 Feb 28 04:56:03 crc kubenswrapper[5014]: I0228 04:56:03.530688 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a7bfdbf3-8c37-4c43-b266-12b88843c085" containerName="sg-core" containerID="cri-o://59d96545945588c50d45f08d22daac6cae57921e63c4c77238d5c44759fd0b51" gracePeriod=30 Feb 28 04:56:04 crc kubenswrapper[5014]: I0228 04:56:04.043031 5014 generic.go:334] "Generic (PLEG): container finished" podID="a7bfdbf3-8c37-4c43-b266-12b88843c085" containerID="10db8c22833f304a2f59bc8ff84c619fe3713e740a5df5ccd2fff96e8c45de76" exitCode=0 Feb 28 04:56:04 crc kubenswrapper[5014]: I0228 04:56:04.043061 5014 generic.go:334] "Generic (PLEG): container finished" podID="a7bfdbf3-8c37-4c43-b266-12b88843c085" containerID="59d96545945588c50d45f08d22daac6cae57921e63c4c77238d5c44759fd0b51" exitCode=2 Feb 28 04:56:04 crc kubenswrapper[5014]: I0228 04:56:04.043072 5014 generic.go:334] "Generic (PLEG): container finished" podID="a7bfdbf3-8c37-4c43-b266-12b88843c085" containerID="23de5b50b110e5218b34964b174fecfb86edd0372b410dd3a6c653980146e08c" exitCode=0 Feb 28 04:56:04 crc kubenswrapper[5014]: I0228 04:56:04.043095 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7bfdbf3-8c37-4c43-b266-12b88843c085","Type":"ContainerDied","Data":"10db8c22833f304a2f59bc8ff84c619fe3713e740a5df5ccd2fff96e8c45de76"} Feb 28 04:56:04 crc kubenswrapper[5014]: I0228 04:56:04.043144 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7bfdbf3-8c37-4c43-b266-12b88843c085","Type":"ContainerDied","Data":"59d96545945588c50d45f08d22daac6cae57921e63c4c77238d5c44759fd0b51"} Feb 28 04:56:04 crc kubenswrapper[5014]: I0228 04:56:04.043156 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"a7bfdbf3-8c37-4c43-b266-12b88843c085","Type":"ContainerDied","Data":"23de5b50b110e5218b34964b174fecfb86edd0372b410dd3a6c653980146e08c"} Feb 28 04:56:04 crc kubenswrapper[5014]: I0228 04:56:04.044733 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-4b7rx" event={"ID":"0c076180-9ead-4605-84e7-d0d920d19cdb","Type":"ContainerStarted","Data":"45beb3fb190864d244f2e3d73b956fba32e04bcb19e0cb03a3e97a246d9047eb"} Feb 28 04:56:04 crc kubenswrapper[5014]: I0228 04:56:04.044974 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="eb100331-cd16-4875-8529-b7e34aaa385e" containerName="nova-api-log" containerID="cri-o://93a97422b33866e5002f8f0552056eaed6681f073abab421f58b2c3e47585e74" gracePeriod=30 Feb 28 04:56:04 crc kubenswrapper[5014]: I0228 04:56:04.045057 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="eb100331-cd16-4875-8529-b7e34aaa385e" containerName="nova-api-api" containerID="cri-o://303fad5cd0690422998c89528c459e9eda4272bb4391c49db99a8934dd4c22dc" gracePeriod=30 Feb 28 04:56:04 crc kubenswrapper[5014]: I0228 04:56:04.072535 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-89c5cd4d5-4b7rx" podStartSLOduration=3.072517764 podStartE2EDuration="3.072517764s" podCreationTimestamp="2026-02-28 04:56:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:56:04.065373393 +0000 UTC m=+1352.735499313" watchObservedRunningTime="2026-02-28 04:56:04.072517764 +0000 UTC m=+1352.742643674" Feb 28 04:56:04 crc kubenswrapper[5014]: I0228 04:56:04.465756 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537576-mx47j" Feb 28 04:56:04 crc kubenswrapper[5014]: I0228 04:56:04.560298 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lv5g6\" (UniqueName: \"kubernetes.io/projected/a3702cbc-ce6b-4f93-9015-bd7cdc462025-kube-api-access-lv5g6\") pod \"a3702cbc-ce6b-4f93-9015-bd7cdc462025\" (UID: \"a3702cbc-ce6b-4f93-9015-bd7cdc462025\") " Feb 28 04:56:04 crc kubenswrapper[5014]: I0228 04:56:04.567542 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3702cbc-ce6b-4f93-9015-bd7cdc462025-kube-api-access-lv5g6" (OuterVolumeSpecName: "kube-api-access-lv5g6") pod "a3702cbc-ce6b-4f93-9015-bd7cdc462025" (UID: "a3702cbc-ce6b-4f93-9015-bd7cdc462025"). InnerVolumeSpecName "kube-api-access-lv5g6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:56:04 crc kubenswrapper[5014]: I0228 04:56:04.662279 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lv5g6\" (UniqueName: \"kubernetes.io/projected/a3702cbc-ce6b-4f93-9015-bd7cdc462025-kube-api-access-lv5g6\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:04 crc kubenswrapper[5014]: I0228 04:56:04.925045 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.058506 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537576-mx47j" event={"ID":"a3702cbc-ce6b-4f93-9015-bd7cdc462025","Type":"ContainerDied","Data":"6a175dc0fc1594d15b541673d17a2dc305822ba9035226512ff060b46943e3c1"} Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.058550 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a175dc0fc1594d15b541673d17a2dc305822ba9035226512ff060b46943e3c1" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.058602 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537576-mx47j" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.069493 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7bfdbf3-8c37-4c43-b266-12b88843c085-log-httpd\") pod \"a7bfdbf3-8c37-4c43-b266-12b88843c085\" (UID: \"a7bfdbf3-8c37-4c43-b266-12b88843c085\") " Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.069575 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a7bfdbf3-8c37-4c43-b266-12b88843c085-sg-core-conf-yaml\") pod \"a7bfdbf3-8c37-4c43-b266-12b88843c085\" (UID: \"a7bfdbf3-8c37-4c43-b266-12b88843c085\") " Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.069633 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7bfdbf3-8c37-4c43-b266-12b88843c085-ceilometer-tls-certs\") pod \"a7bfdbf3-8c37-4c43-b266-12b88843c085\" (UID: \"a7bfdbf3-8c37-4c43-b266-12b88843c085\") " Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.069659 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7bfdbf3-8c37-4c43-b266-12b88843c085-config-data\") pod \"a7bfdbf3-8c37-4c43-b266-12b88843c085\" (UID: \"a7bfdbf3-8c37-4c43-b266-12b88843c085\") " Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.069720 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7bfdbf3-8c37-4c43-b266-12b88843c085-combined-ca-bundle\") pod \"a7bfdbf3-8c37-4c43-b266-12b88843c085\" (UID: \"a7bfdbf3-8c37-4c43-b266-12b88843c085\") " Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.069769 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7bfdbf3-8c37-4c43-b266-12b88843c085-run-httpd\") pod \"a7bfdbf3-8c37-4c43-b266-12b88843c085\" (UID: \"a7bfdbf3-8c37-4c43-b266-12b88843c085\") " Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.069896 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7bfdbf3-8c37-4c43-b266-12b88843c085-scripts\") pod \"a7bfdbf3-8c37-4c43-b266-12b88843c085\" (UID: \"a7bfdbf3-8c37-4c43-b266-12b88843c085\") " Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.069972 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8sk2m\" (UniqueName: \"kubernetes.io/projected/a7bfdbf3-8c37-4c43-b266-12b88843c085-kube-api-access-8sk2m\") pod \"a7bfdbf3-8c37-4c43-b266-12b88843c085\" (UID: \"a7bfdbf3-8c37-4c43-b266-12b88843c085\") " Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.070393 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7bfdbf3-8c37-4c43-b266-12b88843c085-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a7bfdbf3-8c37-4c43-b266-12b88843c085" (UID: "a7bfdbf3-8c37-4c43-b266-12b88843c085"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.070954 5014 generic.go:334] "Generic (PLEG): container finished" podID="a7bfdbf3-8c37-4c43-b266-12b88843c085" containerID="03ea790764897836fbdb9c9eb74b2b8951839c5345b065fcb8c6fdd614077fc9" exitCode=0 Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.071041 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7bfdbf3-8c37-4c43-b266-12b88843c085","Type":"ContainerDied","Data":"03ea790764897836fbdb9c9eb74b2b8951839c5345b065fcb8c6fdd614077fc9"} Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.071073 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a7bfdbf3-8c37-4c43-b266-12b88843c085","Type":"ContainerDied","Data":"d730cecd7d5c06937b6e71129fd2aa307370d084924f289fc668b8ea884a6017"} Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.071096 5014 scope.go:117] "RemoveContainer" containerID="10db8c22833f304a2f59bc8ff84c619fe3713e740a5df5ccd2fff96e8c45de76" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.071248 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.072249 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7bfdbf3-8c37-4c43-b266-12b88843c085-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a7bfdbf3-8c37-4c43-b266-12b88843c085" (UID: "a7bfdbf3-8c37-4c43-b266-12b88843c085"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.075982 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7bfdbf3-8c37-4c43-b266-12b88843c085-kube-api-access-8sk2m" (OuterVolumeSpecName: "kube-api-access-8sk2m") pod "a7bfdbf3-8c37-4c43-b266-12b88843c085" (UID: "a7bfdbf3-8c37-4c43-b266-12b88843c085"). InnerVolumeSpecName "kube-api-access-8sk2m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.076939 5014 generic.go:334] "Generic (PLEG): container finished" podID="eb100331-cd16-4875-8529-b7e34aaa385e" containerID="93a97422b33866e5002f8f0552056eaed6681f073abab421f58b2c3e47585e74" exitCode=143 Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.077036 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb100331-cd16-4875-8529-b7e34aaa385e","Type":"ContainerDied","Data":"93a97422b33866e5002f8f0552056eaed6681f073abab421f58b2c3e47585e74"} Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.077916 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-89c5cd4d5-4b7rx" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.079156 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7bfdbf3-8c37-4c43-b266-12b88843c085-scripts" (OuterVolumeSpecName: "scripts") pod "a7bfdbf3-8c37-4c43-b266-12b88843c085" (UID: "a7bfdbf3-8c37-4c43-b266-12b88843c085"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.121062 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7bfdbf3-8c37-4c43-b266-12b88843c085-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a7bfdbf3-8c37-4c43-b266-12b88843c085" (UID: "a7bfdbf3-8c37-4c43-b266-12b88843c085"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.145762 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7bfdbf3-8c37-4c43-b266-12b88843c085-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a7bfdbf3-8c37-4c43-b266-12b88843c085" (UID: "a7bfdbf3-8c37-4c43-b266-12b88843c085"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.174483 5014 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a7bfdbf3-8c37-4c43-b266-12b88843c085-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.174893 5014 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a7bfdbf3-8c37-4c43-b266-12b88843c085-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.174967 5014 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7bfdbf3-8c37-4c43-b266-12b88843c085-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.175024 5014 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7bfdbf3-8c37-4c43-b266-12b88843c085-scripts\") on node \"crc\" 
DevicePath \"\"" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.175079 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8sk2m\" (UniqueName: \"kubernetes.io/projected/a7bfdbf3-8c37-4c43-b266-12b88843c085-kube-api-access-8sk2m\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.175142 5014 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a7bfdbf3-8c37-4c43-b266-12b88843c085-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.175947 5014 scope.go:117] "RemoveContainer" containerID="59d96545945588c50d45f08d22daac6cae57921e63c4c77238d5c44759fd0b51" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.192902 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7bfdbf3-8c37-4c43-b266-12b88843c085-config-data" (OuterVolumeSpecName: "config-data") pod "a7bfdbf3-8c37-4c43-b266-12b88843c085" (UID: "a7bfdbf3-8c37-4c43-b266-12b88843c085"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.201374 5014 scope.go:117] "RemoveContainer" containerID="03ea790764897836fbdb9c9eb74b2b8951839c5345b065fcb8c6fdd614077fc9" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.222031 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7bfdbf3-8c37-4c43-b266-12b88843c085-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7bfdbf3-8c37-4c43-b266-12b88843c085" (UID: "a7bfdbf3-8c37-4c43-b266-12b88843c085"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.231835 5014 scope.go:117] "RemoveContainer" containerID="23de5b50b110e5218b34964b174fecfb86edd0372b410dd3a6c653980146e08c" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.250740 5014 scope.go:117] "RemoveContainer" containerID="10db8c22833f304a2f59bc8ff84c619fe3713e740a5df5ccd2fff96e8c45de76" Feb 28 04:56:05 crc kubenswrapper[5014]: E0228 04:56:05.251270 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10db8c22833f304a2f59bc8ff84c619fe3713e740a5df5ccd2fff96e8c45de76\": container with ID starting with 10db8c22833f304a2f59bc8ff84c619fe3713e740a5df5ccd2fff96e8c45de76 not found: ID does not exist" containerID="10db8c22833f304a2f59bc8ff84c619fe3713e740a5df5ccd2fff96e8c45de76" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.251320 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10db8c22833f304a2f59bc8ff84c619fe3713e740a5df5ccd2fff96e8c45de76"} err="failed to get container status \"10db8c22833f304a2f59bc8ff84c619fe3713e740a5df5ccd2fff96e8c45de76\": rpc error: code = NotFound desc = could not find container \"10db8c22833f304a2f59bc8ff84c619fe3713e740a5df5ccd2fff96e8c45de76\": container with ID starting with 10db8c22833f304a2f59bc8ff84c619fe3713e740a5df5ccd2fff96e8c45de76 not found: ID does not exist" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.251351 5014 scope.go:117] "RemoveContainer" containerID="59d96545945588c50d45f08d22daac6cae57921e63c4c77238d5c44759fd0b51" Feb 28 04:56:05 crc kubenswrapper[5014]: E0228 04:56:05.251674 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"59d96545945588c50d45f08d22daac6cae57921e63c4c77238d5c44759fd0b51\": container with ID starting with 
59d96545945588c50d45f08d22daac6cae57921e63c4c77238d5c44759fd0b51 not found: ID does not exist" containerID="59d96545945588c50d45f08d22daac6cae57921e63c4c77238d5c44759fd0b51" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.251780 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"59d96545945588c50d45f08d22daac6cae57921e63c4c77238d5c44759fd0b51"} err="failed to get container status \"59d96545945588c50d45f08d22daac6cae57921e63c4c77238d5c44759fd0b51\": rpc error: code = NotFound desc = could not find container \"59d96545945588c50d45f08d22daac6cae57921e63c4c77238d5c44759fd0b51\": container with ID starting with 59d96545945588c50d45f08d22daac6cae57921e63c4c77238d5c44759fd0b51 not found: ID does not exist" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.251889 5014 scope.go:117] "RemoveContainer" containerID="03ea790764897836fbdb9c9eb74b2b8951839c5345b065fcb8c6fdd614077fc9" Feb 28 04:56:05 crc kubenswrapper[5014]: E0228 04:56:05.252218 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03ea790764897836fbdb9c9eb74b2b8951839c5345b065fcb8c6fdd614077fc9\": container with ID starting with 03ea790764897836fbdb9c9eb74b2b8951839c5345b065fcb8c6fdd614077fc9 not found: ID does not exist" containerID="03ea790764897836fbdb9c9eb74b2b8951839c5345b065fcb8c6fdd614077fc9" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.252251 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03ea790764897836fbdb9c9eb74b2b8951839c5345b065fcb8c6fdd614077fc9"} err="failed to get container status \"03ea790764897836fbdb9c9eb74b2b8951839c5345b065fcb8c6fdd614077fc9\": rpc error: code = NotFound desc = could not find container \"03ea790764897836fbdb9c9eb74b2b8951839c5345b065fcb8c6fdd614077fc9\": container with ID starting with 03ea790764897836fbdb9c9eb74b2b8951839c5345b065fcb8c6fdd614077fc9 not found: ID does not 
exist" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.252269 5014 scope.go:117] "RemoveContainer" containerID="23de5b50b110e5218b34964b174fecfb86edd0372b410dd3a6c653980146e08c" Feb 28 04:56:05 crc kubenswrapper[5014]: E0228 04:56:05.252624 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23de5b50b110e5218b34964b174fecfb86edd0372b410dd3a6c653980146e08c\": container with ID starting with 23de5b50b110e5218b34964b174fecfb86edd0372b410dd3a6c653980146e08c not found: ID does not exist" containerID="23de5b50b110e5218b34964b174fecfb86edd0372b410dd3a6c653980146e08c" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.252652 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23de5b50b110e5218b34964b174fecfb86edd0372b410dd3a6c653980146e08c"} err="failed to get container status \"23de5b50b110e5218b34964b174fecfb86edd0372b410dd3a6c653980146e08c\": rpc error: code = NotFound desc = could not find container \"23de5b50b110e5218b34964b174fecfb86edd0372b410dd3a6c653980146e08c\": container with ID starting with 23de5b50b110e5218b34964b174fecfb86edd0372b410dd3a6c653980146e08c not found: ID does not exist" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.277043 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7bfdbf3-8c37-4c43-b266-12b88843c085-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.277077 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7bfdbf3-8c37-4c43-b266-12b88843c085-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.411407 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.425153 5014 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.437322 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:56:05 crc kubenswrapper[5014]: E0228 04:56:05.437978 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3702cbc-ce6b-4f93-9015-bd7cdc462025" containerName="oc" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.438114 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3702cbc-ce6b-4f93-9015-bd7cdc462025" containerName="oc" Feb 28 04:56:05 crc kubenswrapper[5014]: E0228 04:56:05.438185 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7bfdbf3-8c37-4c43-b266-12b88843c085" containerName="ceilometer-central-agent" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.438239 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7bfdbf3-8c37-4c43-b266-12b88843c085" containerName="ceilometer-central-agent" Feb 28 04:56:05 crc kubenswrapper[5014]: E0228 04:56:05.438311 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7bfdbf3-8c37-4c43-b266-12b88843c085" containerName="proxy-httpd" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.438365 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7bfdbf3-8c37-4c43-b266-12b88843c085" containerName="proxy-httpd" Feb 28 04:56:05 crc kubenswrapper[5014]: E0228 04:56:05.439077 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7bfdbf3-8c37-4c43-b266-12b88843c085" containerName="sg-core" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.439177 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7bfdbf3-8c37-4c43-b266-12b88843c085" containerName="sg-core" Feb 28 04:56:05 crc kubenswrapper[5014]: E0228 04:56:05.439302 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7bfdbf3-8c37-4c43-b266-12b88843c085" containerName="ceilometer-notification-agent" 
Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.439366 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7bfdbf3-8c37-4c43-b266-12b88843c085" containerName="ceilometer-notification-agent" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.439668 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7bfdbf3-8c37-4c43-b266-12b88843c085" containerName="proxy-httpd" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.439734 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7bfdbf3-8c37-4c43-b266-12b88843c085" containerName="ceilometer-notification-agent" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.439795 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7bfdbf3-8c37-4c43-b266-12b88843c085" containerName="sg-core" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.439934 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3702cbc-ce6b-4f93-9015-bd7cdc462025" containerName="oc" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.440011 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7bfdbf3-8c37-4c43-b266-12b88843c085" containerName="ceilometer-central-agent" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.442357 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.444593 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.444772 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.446156 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.466355 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.529972 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537570-cjqpt"] Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.540593 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537570-cjqpt"] Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.568728 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:56:05 crc kubenswrapper[5014]: E0228 04:56:05.569533 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ceilometer-tls-certs combined-ca-bundle config-data kube-api-access-lmwjm log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="09b5cedc-4fad-4be3-bf23-253532e33afa" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.582126 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09b5cedc-4fad-4be3-bf23-253532e33afa-scripts\") pod \"ceilometer-0\" (UID: \"09b5cedc-4fad-4be3-bf23-253532e33afa\") " pod="openstack/ceilometer-0" Feb 28 04:56:05 crc 
kubenswrapper[5014]: I0228 04:56:05.582197 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b5cedc-4fad-4be3-bf23-253532e33afa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"09b5cedc-4fad-4be3-bf23-253532e33afa\") " pod="openstack/ceilometer-0" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.582232 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmwjm\" (UniqueName: \"kubernetes.io/projected/09b5cedc-4fad-4be3-bf23-253532e33afa-kube-api-access-lmwjm\") pod \"ceilometer-0\" (UID: \"09b5cedc-4fad-4be3-bf23-253532e33afa\") " pod="openstack/ceilometer-0" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.582286 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/09b5cedc-4fad-4be3-bf23-253532e33afa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"09b5cedc-4fad-4be3-bf23-253532e33afa\") " pod="openstack/ceilometer-0" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.582544 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09b5cedc-4fad-4be3-bf23-253532e33afa-config-data\") pod \"ceilometer-0\" (UID: \"09b5cedc-4fad-4be3-bf23-253532e33afa\") " pod="openstack/ceilometer-0" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.582669 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09b5cedc-4fad-4be3-bf23-253532e33afa-log-httpd\") pod \"ceilometer-0\" (UID: \"09b5cedc-4fad-4be3-bf23-253532e33afa\") " pod="openstack/ceilometer-0" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.582710 5014 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09b5cedc-4fad-4be3-bf23-253532e33afa-run-httpd\") pod \"ceilometer-0\" (UID: \"09b5cedc-4fad-4be3-bf23-253532e33afa\") " pod="openstack/ceilometer-0" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.582849 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/09b5cedc-4fad-4be3-bf23-253532e33afa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"09b5cedc-4fad-4be3-bf23-253532e33afa\") " pod="openstack/ceilometer-0" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.684728 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09b5cedc-4fad-4be3-bf23-253532e33afa-scripts\") pod \"ceilometer-0\" (UID: \"09b5cedc-4fad-4be3-bf23-253532e33afa\") " pod="openstack/ceilometer-0" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.684858 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b5cedc-4fad-4be3-bf23-253532e33afa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"09b5cedc-4fad-4be3-bf23-253532e33afa\") " pod="openstack/ceilometer-0" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.684909 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmwjm\" (UniqueName: \"kubernetes.io/projected/09b5cedc-4fad-4be3-bf23-253532e33afa-kube-api-access-lmwjm\") pod \"ceilometer-0\" (UID: \"09b5cedc-4fad-4be3-bf23-253532e33afa\") " pod="openstack/ceilometer-0" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.684967 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/09b5cedc-4fad-4be3-bf23-253532e33afa-ceilometer-tls-certs\") pod 
\"ceilometer-0\" (UID: \"09b5cedc-4fad-4be3-bf23-253532e33afa\") " pod="openstack/ceilometer-0" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.685027 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09b5cedc-4fad-4be3-bf23-253532e33afa-config-data\") pod \"ceilometer-0\" (UID: \"09b5cedc-4fad-4be3-bf23-253532e33afa\") " pod="openstack/ceilometer-0" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.685071 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09b5cedc-4fad-4be3-bf23-253532e33afa-log-httpd\") pod \"ceilometer-0\" (UID: \"09b5cedc-4fad-4be3-bf23-253532e33afa\") " pod="openstack/ceilometer-0" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.685094 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09b5cedc-4fad-4be3-bf23-253532e33afa-run-httpd\") pod \"ceilometer-0\" (UID: \"09b5cedc-4fad-4be3-bf23-253532e33afa\") " pod="openstack/ceilometer-0" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.685136 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/09b5cedc-4fad-4be3-bf23-253532e33afa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"09b5cedc-4fad-4be3-bf23-253532e33afa\") " pod="openstack/ceilometer-0" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.686020 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09b5cedc-4fad-4be3-bf23-253532e33afa-log-httpd\") pod \"ceilometer-0\" (UID: \"09b5cedc-4fad-4be3-bf23-253532e33afa\") " pod="openstack/ceilometer-0" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.686056 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/09b5cedc-4fad-4be3-bf23-253532e33afa-run-httpd\") pod \"ceilometer-0\" (UID: \"09b5cedc-4fad-4be3-bf23-253532e33afa\") " pod="openstack/ceilometer-0" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.689529 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/09b5cedc-4fad-4be3-bf23-253532e33afa-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"09b5cedc-4fad-4be3-bf23-253532e33afa\") " pod="openstack/ceilometer-0" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.690022 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/09b5cedc-4fad-4be3-bf23-253532e33afa-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"09b5cedc-4fad-4be3-bf23-253532e33afa\") " pod="openstack/ceilometer-0" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.691944 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b5cedc-4fad-4be3-bf23-253532e33afa-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"09b5cedc-4fad-4be3-bf23-253532e33afa\") " pod="openstack/ceilometer-0" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.692494 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09b5cedc-4fad-4be3-bf23-253532e33afa-config-data\") pod \"ceilometer-0\" (UID: \"09b5cedc-4fad-4be3-bf23-253532e33afa\") " pod="openstack/ceilometer-0" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.693540 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09b5cedc-4fad-4be3-bf23-253532e33afa-scripts\") pod \"ceilometer-0\" (UID: \"09b5cedc-4fad-4be3-bf23-253532e33afa\") " pod="openstack/ceilometer-0" Feb 28 04:56:05 crc kubenswrapper[5014]: I0228 04:56:05.707350 5014 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmwjm\" (UniqueName: \"kubernetes.io/projected/09b5cedc-4fad-4be3-bf23-253532e33afa-kube-api-access-lmwjm\") pod \"ceilometer-0\" (UID: \"09b5cedc-4fad-4be3-bf23-253532e33afa\") " pod="openstack/ceilometer-0" Feb 28 04:56:06 crc kubenswrapper[5014]: I0228 04:56:06.090189 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 04:56:06 crc kubenswrapper[5014]: I0228 04:56:06.101128 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 04:56:06 crc kubenswrapper[5014]: I0228 04:56:06.190978 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7bfdbf3-8c37-4c43-b266-12b88843c085" path="/var/lib/kubelet/pods/a7bfdbf3-8c37-4c43-b266-12b88843c085/volumes" Feb 28 04:56:06 crc kubenswrapper[5014]: I0228 04:56:06.192296 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d816e108-724b-47c0-a6a2-6499c9c56252" path="/var/lib/kubelet/pods/d816e108-724b-47c0-a6a2-6499c9c56252/volumes" Feb 28 04:56:06 crc kubenswrapper[5014]: I0228 04:56:06.194294 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09b5cedc-4fad-4be3-bf23-253532e33afa-log-httpd\") pod \"09b5cedc-4fad-4be3-bf23-253532e33afa\" (UID: \"09b5cedc-4fad-4be3-bf23-253532e33afa\") " Feb 28 04:56:06 crc kubenswrapper[5014]: I0228 04:56:06.194420 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09b5cedc-4fad-4be3-bf23-253532e33afa-run-httpd\") pod \"09b5cedc-4fad-4be3-bf23-253532e33afa\" (UID: \"09b5cedc-4fad-4be3-bf23-253532e33afa\") " Feb 28 04:56:06 crc kubenswrapper[5014]: I0228 04:56:06.194542 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/09b5cedc-4fad-4be3-bf23-253532e33afa-ceilometer-tls-certs\") pod \"09b5cedc-4fad-4be3-bf23-253532e33afa\" (UID: \"09b5cedc-4fad-4be3-bf23-253532e33afa\") " Feb 28 04:56:06 crc kubenswrapper[5014]: I0228 04:56:06.195123 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09b5cedc-4fad-4be3-bf23-253532e33afa-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "09b5cedc-4fad-4be3-bf23-253532e33afa" (UID: "09b5cedc-4fad-4be3-bf23-253532e33afa"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:56:06 crc kubenswrapper[5014]: I0228 04:56:06.195153 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09b5cedc-4fad-4be3-bf23-253532e33afa-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "09b5cedc-4fad-4be3-bf23-253532e33afa" (UID: "09b5cedc-4fad-4be3-bf23-253532e33afa"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:56:06 crc kubenswrapper[5014]: I0228 04:56:06.195134 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/09b5cedc-4fad-4be3-bf23-253532e33afa-sg-core-conf-yaml\") pod \"09b5cedc-4fad-4be3-bf23-253532e33afa\" (UID: \"09b5cedc-4fad-4be3-bf23-253532e33afa\") " Feb 28 04:56:06 crc kubenswrapper[5014]: I0228 04:56:06.195305 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09b5cedc-4fad-4be3-bf23-253532e33afa-combined-ca-bundle\") pod \"09b5cedc-4fad-4be3-bf23-253532e33afa\" (UID: \"09b5cedc-4fad-4be3-bf23-253532e33afa\") " Feb 28 04:56:06 crc kubenswrapper[5014]: I0228 04:56:06.195447 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/09b5cedc-4fad-4be3-bf23-253532e33afa-config-data\") pod \"09b5cedc-4fad-4be3-bf23-253532e33afa\" (UID: \"09b5cedc-4fad-4be3-bf23-253532e33afa\") " Feb 28 04:56:06 crc kubenswrapper[5014]: I0228 04:56:06.195493 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09b5cedc-4fad-4be3-bf23-253532e33afa-scripts\") pod \"09b5cedc-4fad-4be3-bf23-253532e33afa\" (UID: \"09b5cedc-4fad-4be3-bf23-253532e33afa\") " Feb 28 04:56:06 crc kubenswrapper[5014]: I0228 04:56:06.195546 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmwjm\" (UniqueName: \"kubernetes.io/projected/09b5cedc-4fad-4be3-bf23-253532e33afa-kube-api-access-lmwjm\") pod \"09b5cedc-4fad-4be3-bf23-253532e33afa\" (UID: \"09b5cedc-4fad-4be3-bf23-253532e33afa\") " Feb 28 04:56:06 crc kubenswrapper[5014]: I0228 04:56:06.196458 5014 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09b5cedc-4fad-4be3-bf23-253532e33afa-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:06 crc kubenswrapper[5014]: I0228 04:56:06.196488 5014 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/09b5cedc-4fad-4be3-bf23-253532e33afa-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:06 crc kubenswrapper[5014]: I0228 04:56:06.215197 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09b5cedc-4fad-4be3-bf23-253532e33afa-config-data" (OuterVolumeSpecName: "config-data") pod "09b5cedc-4fad-4be3-bf23-253532e33afa" (UID: "09b5cedc-4fad-4be3-bf23-253532e33afa"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:56:06 crc kubenswrapper[5014]: I0228 04:56:06.215232 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09b5cedc-4fad-4be3-bf23-253532e33afa-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09b5cedc-4fad-4be3-bf23-253532e33afa" (UID: "09b5cedc-4fad-4be3-bf23-253532e33afa"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:56:06 crc kubenswrapper[5014]: I0228 04:56:06.215183 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09b5cedc-4fad-4be3-bf23-253532e33afa-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "09b5cedc-4fad-4be3-bf23-253532e33afa" (UID: "09b5cedc-4fad-4be3-bf23-253532e33afa"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:56:06 crc kubenswrapper[5014]: I0228 04:56:06.215245 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09b5cedc-4fad-4be3-bf23-253532e33afa-kube-api-access-lmwjm" (OuterVolumeSpecName: "kube-api-access-lmwjm") pod "09b5cedc-4fad-4be3-bf23-253532e33afa" (UID: "09b5cedc-4fad-4be3-bf23-253532e33afa"). InnerVolumeSpecName "kube-api-access-lmwjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:56:06 crc kubenswrapper[5014]: I0228 04:56:06.215271 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09b5cedc-4fad-4be3-bf23-253532e33afa-scripts" (OuterVolumeSpecName: "scripts") pod "09b5cedc-4fad-4be3-bf23-253532e33afa" (UID: "09b5cedc-4fad-4be3-bf23-253532e33afa"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:56:06 crc kubenswrapper[5014]: I0228 04:56:06.215312 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09b5cedc-4fad-4be3-bf23-253532e33afa-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "09b5cedc-4fad-4be3-bf23-253532e33afa" (UID: "09b5cedc-4fad-4be3-bf23-253532e33afa"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:56:06 crc kubenswrapper[5014]: I0228 04:56:06.297716 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09b5cedc-4fad-4be3-bf23-253532e33afa-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:06 crc kubenswrapper[5014]: I0228 04:56:06.298025 5014 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/09b5cedc-4fad-4be3-bf23-253532e33afa-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:06 crc kubenswrapper[5014]: I0228 04:56:06.298036 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmwjm\" (UniqueName: \"kubernetes.io/projected/09b5cedc-4fad-4be3-bf23-253532e33afa-kube-api-access-lmwjm\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:06 crc kubenswrapper[5014]: I0228 04:56:06.298045 5014 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/09b5cedc-4fad-4be3-bf23-253532e33afa-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:06 crc kubenswrapper[5014]: I0228 04:56:06.298054 5014 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/09b5cedc-4fad-4be3-bf23-253532e33afa-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:06 crc kubenswrapper[5014]: I0228 04:56:06.298063 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/09b5cedc-4fad-4be3-bf23-253532e33afa-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.098881 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.156651 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.173000 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.205631 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.208882 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.221433 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.222270 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.222451 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.232916 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.317144 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/522b8e6d-5531-4436-9c64-fadde40a77df-scripts\") pod \"ceilometer-0\" (UID: \"522b8e6d-5531-4436-9c64-fadde40a77df\") " pod="openstack/ceilometer-0" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.317201 5014 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/522b8e6d-5531-4436-9c64-fadde40a77df-run-httpd\") pod \"ceilometer-0\" (UID: \"522b8e6d-5531-4436-9c64-fadde40a77df\") " pod="openstack/ceilometer-0" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.317231 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttfls\" (UniqueName: \"kubernetes.io/projected/522b8e6d-5531-4436-9c64-fadde40a77df-kube-api-access-ttfls\") pod \"ceilometer-0\" (UID: \"522b8e6d-5531-4436-9c64-fadde40a77df\") " pod="openstack/ceilometer-0" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.317278 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/522b8e6d-5531-4436-9c64-fadde40a77df-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"522b8e6d-5531-4436-9c64-fadde40a77df\") " pod="openstack/ceilometer-0" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.317304 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/522b8e6d-5531-4436-9c64-fadde40a77df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"522b8e6d-5531-4436-9c64-fadde40a77df\") " pod="openstack/ceilometer-0" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.317401 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/522b8e6d-5531-4436-9c64-fadde40a77df-config-data\") pod \"ceilometer-0\" (UID: \"522b8e6d-5531-4436-9c64-fadde40a77df\") " pod="openstack/ceilometer-0" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.317431 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/522b8e6d-5531-4436-9c64-fadde40a77df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"522b8e6d-5531-4436-9c64-fadde40a77df\") " pod="openstack/ceilometer-0" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.317458 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/522b8e6d-5531-4436-9c64-fadde40a77df-log-httpd\") pod \"ceilometer-0\" (UID: \"522b8e6d-5531-4436-9c64-fadde40a77df\") " pod="openstack/ceilometer-0" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.419140 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/522b8e6d-5531-4436-9c64-fadde40a77df-config-data\") pod \"ceilometer-0\" (UID: \"522b8e6d-5531-4436-9c64-fadde40a77df\") " pod="openstack/ceilometer-0" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.419465 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/522b8e6d-5531-4436-9c64-fadde40a77df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"522b8e6d-5531-4436-9c64-fadde40a77df\") " pod="openstack/ceilometer-0" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.419495 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/522b8e6d-5531-4436-9c64-fadde40a77df-log-httpd\") pod \"ceilometer-0\" (UID: \"522b8e6d-5531-4436-9c64-fadde40a77df\") " pod="openstack/ceilometer-0" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.419539 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/522b8e6d-5531-4436-9c64-fadde40a77df-scripts\") pod \"ceilometer-0\" (UID: \"522b8e6d-5531-4436-9c64-fadde40a77df\") " pod="openstack/ceilometer-0" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 
04:56:07.419566 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/522b8e6d-5531-4436-9c64-fadde40a77df-run-httpd\") pod \"ceilometer-0\" (UID: \"522b8e6d-5531-4436-9c64-fadde40a77df\") " pod="openstack/ceilometer-0" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.419589 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttfls\" (UniqueName: \"kubernetes.io/projected/522b8e6d-5531-4436-9c64-fadde40a77df-kube-api-access-ttfls\") pod \"ceilometer-0\" (UID: \"522b8e6d-5531-4436-9c64-fadde40a77df\") " pod="openstack/ceilometer-0" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.419624 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/522b8e6d-5531-4436-9c64-fadde40a77df-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"522b8e6d-5531-4436-9c64-fadde40a77df\") " pod="openstack/ceilometer-0" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.419645 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/522b8e6d-5531-4436-9c64-fadde40a77df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"522b8e6d-5531-4436-9c64-fadde40a77df\") " pod="openstack/ceilometer-0" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.421291 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/522b8e6d-5531-4436-9c64-fadde40a77df-run-httpd\") pod \"ceilometer-0\" (UID: \"522b8e6d-5531-4436-9c64-fadde40a77df\") " pod="openstack/ceilometer-0" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.421390 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/522b8e6d-5531-4436-9c64-fadde40a77df-log-httpd\") pod \"ceilometer-0\" 
(UID: \"522b8e6d-5531-4436-9c64-fadde40a77df\") " pod="openstack/ceilometer-0" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.424004 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/522b8e6d-5531-4436-9c64-fadde40a77df-scripts\") pod \"ceilometer-0\" (UID: \"522b8e6d-5531-4436-9c64-fadde40a77df\") " pod="openstack/ceilometer-0" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.425627 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/522b8e6d-5531-4436-9c64-fadde40a77df-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"522b8e6d-5531-4436-9c64-fadde40a77df\") " pod="openstack/ceilometer-0" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.425672 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/522b8e6d-5531-4436-9c64-fadde40a77df-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"522b8e6d-5531-4436-9c64-fadde40a77df\") " pod="openstack/ceilometer-0" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.430089 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/522b8e6d-5531-4436-9c64-fadde40a77df-config-data\") pod \"ceilometer-0\" (UID: \"522b8e6d-5531-4436-9c64-fadde40a77df\") " pod="openstack/ceilometer-0" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.435453 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/522b8e6d-5531-4436-9c64-fadde40a77df-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"522b8e6d-5531-4436-9c64-fadde40a77df\") " pod="openstack/ceilometer-0" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.446884 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttfls\" (UniqueName: 
\"kubernetes.io/projected/522b8e6d-5531-4436-9c64-fadde40a77df-kube-api-access-ttfls\") pod \"ceilometer-0\" (UID: \"522b8e6d-5531-4436-9c64-fadde40a77df\") " pod="openstack/ceilometer-0" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.537640 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.604155 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.626395 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.648023 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.724627 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb100331-cd16-4875-8529-b7e34aaa385e-config-data\") pod \"eb100331-cd16-4875-8529-b7e34aaa385e\" (UID: \"eb100331-cd16-4875-8529-b7e34aaa385e\") " Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.725015 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb100331-cd16-4875-8529-b7e34aaa385e-logs\") pod \"eb100331-cd16-4875-8529-b7e34aaa385e\" (UID: \"eb100331-cd16-4875-8529-b7e34aaa385e\") " Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.725065 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56sfx\" (UniqueName: \"kubernetes.io/projected/eb100331-cd16-4875-8529-b7e34aaa385e-kube-api-access-56sfx\") pod \"eb100331-cd16-4875-8529-b7e34aaa385e\" (UID: \"eb100331-cd16-4875-8529-b7e34aaa385e\") " Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.725237 5014 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb100331-cd16-4875-8529-b7e34aaa385e-combined-ca-bundle\") pod \"eb100331-cd16-4875-8529-b7e34aaa385e\" (UID: \"eb100331-cd16-4875-8529-b7e34aaa385e\") " Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.725515 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb100331-cd16-4875-8529-b7e34aaa385e-logs" (OuterVolumeSpecName: "logs") pod "eb100331-cd16-4875-8529-b7e34aaa385e" (UID: "eb100331-cd16-4875-8529-b7e34aaa385e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.726180 5014 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb100331-cd16-4875-8529-b7e34aaa385e-logs\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.729173 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb100331-cd16-4875-8529-b7e34aaa385e-kube-api-access-56sfx" (OuterVolumeSpecName: "kube-api-access-56sfx") pod "eb100331-cd16-4875-8529-b7e34aaa385e" (UID: "eb100331-cd16-4875-8529-b7e34aaa385e"). InnerVolumeSpecName "kube-api-access-56sfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.768782 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb100331-cd16-4875-8529-b7e34aaa385e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb100331-cd16-4875-8529-b7e34aaa385e" (UID: "eb100331-cd16-4875-8529-b7e34aaa385e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.774160 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb100331-cd16-4875-8529-b7e34aaa385e-config-data" (OuterVolumeSpecName: "config-data") pod "eb100331-cd16-4875-8529-b7e34aaa385e" (UID: "eb100331-cd16-4875-8529-b7e34aaa385e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.830234 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56sfx\" (UniqueName: \"kubernetes.io/projected/eb100331-cd16-4875-8529-b7e34aaa385e-kube-api-access-56sfx\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.830276 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb100331-cd16-4875-8529-b7e34aaa385e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:07 crc kubenswrapper[5014]: I0228 04:56:07.830289 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb100331-cd16-4875-8529-b7e34aaa385e-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.036611 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.110201 5014 generic.go:334] "Generic (PLEG): container finished" podID="eb100331-cd16-4875-8529-b7e34aaa385e" containerID="303fad5cd0690422998c89528c459e9eda4272bb4391c49db99a8934dd4c22dc" exitCode=0 Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.110265 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.110283 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb100331-cd16-4875-8529-b7e34aaa385e","Type":"ContainerDied","Data":"303fad5cd0690422998c89528c459e9eda4272bb4391c49db99a8934dd4c22dc"} Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.110640 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"eb100331-cd16-4875-8529-b7e34aaa385e","Type":"ContainerDied","Data":"299f46e0e88bdbd9c441b934951e44a3dbb86b922b399fb7bd8e2ce60debcbbe"} Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.110659 5014 scope.go:117] "RemoveContainer" containerID="303fad5cd0690422998c89528c459e9eda4272bb4391c49db99a8934dd4c22dc" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.113301 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"522b8e6d-5531-4436-9c64-fadde40a77df","Type":"ContainerStarted","Data":"74f1d367cc82cd42ebc670821be1773d9ad51cf0f467ae766b96d1662d91b57d"} Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.132178 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.135429 5014 scope.go:117] "RemoveContainer" containerID="93a97422b33866e5002f8f0552056eaed6681f073abab421f58b2c3e47585e74" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.151183 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.159481 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.188174 5014 scope.go:117] "RemoveContainer" containerID="303fad5cd0690422998c89528c459e9eda4272bb4391c49db99a8934dd4c22dc" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 
04:56:08.188295 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09b5cedc-4fad-4be3-bf23-253532e33afa" path="/var/lib/kubelet/pods/09b5cedc-4fad-4be3-bf23-253532e33afa/volumes" Feb 28 04:56:08 crc kubenswrapper[5014]: E0228 04:56:08.188690 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"303fad5cd0690422998c89528c459e9eda4272bb4391c49db99a8934dd4c22dc\": container with ID starting with 303fad5cd0690422998c89528c459e9eda4272bb4391c49db99a8934dd4c22dc not found: ID does not exist" containerID="303fad5cd0690422998c89528c459e9eda4272bb4391c49db99a8934dd4c22dc" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.188722 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"303fad5cd0690422998c89528c459e9eda4272bb4391c49db99a8934dd4c22dc"} err="failed to get container status \"303fad5cd0690422998c89528c459e9eda4272bb4391c49db99a8934dd4c22dc\": rpc error: code = NotFound desc = could not find container \"303fad5cd0690422998c89528c459e9eda4272bb4391c49db99a8934dd4c22dc\": container with ID starting with 303fad5cd0690422998c89528c459e9eda4272bb4391c49db99a8934dd4c22dc not found: ID does not exist" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.188741 5014 scope.go:117] "RemoveContainer" containerID="93a97422b33866e5002f8f0552056eaed6681f073abab421f58b2c3e47585e74" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.188871 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb100331-cd16-4875-8529-b7e34aaa385e" path="/var/lib/kubelet/pods/eb100331-cd16-4875-8529-b7e34aaa385e/volumes" Feb 28 04:56:08 crc kubenswrapper[5014]: E0228 04:56:08.189346 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93a97422b33866e5002f8f0552056eaed6681f073abab421f58b2c3e47585e74\": container with ID starting with 
93a97422b33866e5002f8f0552056eaed6681f073abab421f58b2c3e47585e74 not found: ID does not exist" containerID="93a97422b33866e5002f8f0552056eaed6681f073abab421f58b2c3e47585e74" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.189371 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93a97422b33866e5002f8f0552056eaed6681f073abab421f58b2c3e47585e74"} err="failed to get container status \"93a97422b33866e5002f8f0552056eaed6681f073abab421f58b2c3e47585e74\": rpc error: code = NotFound desc = could not find container \"93a97422b33866e5002f8f0552056eaed6681f073abab421f58b2c3e47585e74\": container with ID starting with 93a97422b33866e5002f8f0552056eaed6681f073abab421f58b2c3e47585e74 not found: ID does not exist" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.191923 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 28 04:56:08 crc kubenswrapper[5014]: E0228 04:56:08.192357 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb100331-cd16-4875-8529-b7e34aaa385e" containerName="nova-api-log" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.192378 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb100331-cd16-4875-8529-b7e34aaa385e" containerName="nova-api-log" Feb 28 04:56:08 crc kubenswrapper[5014]: E0228 04:56:08.192394 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb100331-cd16-4875-8529-b7e34aaa385e" containerName="nova-api-api" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.192401 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb100331-cd16-4875-8529-b7e34aaa385e" containerName="nova-api-api" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.192582 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb100331-cd16-4875-8529-b7e34aaa385e" containerName="nova-api-api" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.192600 5014 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="eb100331-cd16-4875-8529-b7e34aaa385e" containerName="nova-api-log" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.193602 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.195651 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.195991 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.196133 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.238359 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.337739 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ad5798-0340-4e23-947d-4b2ca7cc0895-public-tls-certs\") pod \"nova-api-0\" (UID: \"46ad5798-0340-4e23-947d-4b2ca7cc0895\") " pod="openstack/nova-api-0" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.337794 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46ad5798-0340-4e23-947d-4b2ca7cc0895-config-data\") pod \"nova-api-0\" (UID: \"46ad5798-0340-4e23-947d-4b2ca7cc0895\") " pod="openstack/nova-api-0" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.338001 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46ad5798-0340-4e23-947d-4b2ca7cc0895-logs\") pod \"nova-api-0\" (UID: \"46ad5798-0340-4e23-947d-4b2ca7cc0895\") " 
pod="openstack/nova-api-0" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.338079 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ad5798-0340-4e23-947d-4b2ca7cc0895-internal-tls-certs\") pod \"nova-api-0\" (UID: \"46ad5798-0340-4e23-947d-4b2ca7cc0895\") " pod="openstack/nova-api-0" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.338309 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46ad5798-0340-4e23-947d-4b2ca7cc0895-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"46ad5798-0340-4e23-947d-4b2ca7cc0895\") " pod="openstack/nova-api-0" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.338391 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plnrj\" (UniqueName: \"kubernetes.io/projected/46ad5798-0340-4e23-947d-4b2ca7cc0895-kube-api-access-plnrj\") pod \"nova-api-0\" (UID: \"46ad5798-0340-4e23-947d-4b2ca7cc0895\") " pod="openstack/nova-api-0" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.391411 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-xz9gg"] Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.393056 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xz9gg" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.394905 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.396762 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.401901 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-xz9gg"] Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.440234 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46ad5798-0340-4e23-947d-4b2ca7cc0895-config-data\") pod \"nova-api-0\" (UID: \"46ad5798-0340-4e23-947d-4b2ca7cc0895\") " pod="openstack/nova-api-0" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.440335 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47e5f3ce-9596-4be2-a8e1-363a7abd090f-scripts\") pod \"nova-cell1-cell-mapping-xz9gg\" (UID: \"47e5f3ce-9596-4be2-a8e1-363a7abd090f\") " pod="openstack/nova-cell1-cell-mapping-xz9gg" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.440360 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47e5f3ce-9596-4be2-a8e1-363a7abd090f-config-data\") pod \"nova-cell1-cell-mapping-xz9gg\" (UID: \"47e5f3ce-9596-4be2-a8e1-363a7abd090f\") " pod="openstack/nova-cell1-cell-mapping-xz9gg" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.441771 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46ad5798-0340-4e23-947d-4b2ca7cc0895-logs\") pod \"nova-api-0\" (UID: 
\"46ad5798-0340-4e23-947d-4b2ca7cc0895\") " pod="openstack/nova-api-0" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.444674 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46ad5798-0340-4e23-947d-4b2ca7cc0895-logs\") pod \"nova-api-0\" (UID: \"46ad5798-0340-4e23-947d-4b2ca7cc0895\") " pod="openstack/nova-api-0" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.444887 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ad5798-0340-4e23-947d-4b2ca7cc0895-internal-tls-certs\") pod \"nova-api-0\" (UID: \"46ad5798-0340-4e23-947d-4b2ca7cc0895\") " pod="openstack/nova-api-0" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.445006 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47e5f3ce-9596-4be2-a8e1-363a7abd090f-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xz9gg\" (UID: \"47e5f3ce-9596-4be2-a8e1-363a7abd090f\") " pod="openstack/nova-cell1-cell-mapping-xz9gg" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.445168 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46ad5798-0340-4e23-947d-4b2ca7cc0895-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"46ad5798-0340-4e23-947d-4b2ca7cc0895\") " pod="openstack/nova-api-0" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.445294 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plnrj\" (UniqueName: \"kubernetes.io/projected/46ad5798-0340-4e23-947d-4b2ca7cc0895-kube-api-access-plnrj\") pod \"nova-api-0\" (UID: \"46ad5798-0340-4e23-947d-4b2ca7cc0895\") " pod="openstack/nova-api-0" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.445994 5014 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xsbw\" (UniqueName: \"kubernetes.io/projected/47e5f3ce-9596-4be2-a8e1-363a7abd090f-kube-api-access-9xsbw\") pod \"nova-cell1-cell-mapping-xz9gg\" (UID: \"47e5f3ce-9596-4be2-a8e1-363a7abd090f\") " pod="openstack/nova-cell1-cell-mapping-xz9gg" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.446134 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ad5798-0340-4e23-947d-4b2ca7cc0895-public-tls-certs\") pod \"nova-api-0\" (UID: \"46ad5798-0340-4e23-947d-4b2ca7cc0895\") " pod="openstack/nova-api-0" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.455452 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ad5798-0340-4e23-947d-4b2ca7cc0895-internal-tls-certs\") pod \"nova-api-0\" (UID: \"46ad5798-0340-4e23-947d-4b2ca7cc0895\") " pod="openstack/nova-api-0" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.461359 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ad5798-0340-4e23-947d-4b2ca7cc0895-public-tls-certs\") pod \"nova-api-0\" (UID: \"46ad5798-0340-4e23-947d-4b2ca7cc0895\") " pod="openstack/nova-api-0" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.461904 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46ad5798-0340-4e23-947d-4b2ca7cc0895-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"46ad5798-0340-4e23-947d-4b2ca7cc0895\") " pod="openstack/nova-api-0" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.462620 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46ad5798-0340-4e23-947d-4b2ca7cc0895-config-data\") pod \"nova-api-0\" (UID: 
\"46ad5798-0340-4e23-947d-4b2ca7cc0895\") " pod="openstack/nova-api-0" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.465216 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plnrj\" (UniqueName: \"kubernetes.io/projected/46ad5798-0340-4e23-947d-4b2ca7cc0895-kube-api-access-plnrj\") pod \"nova-api-0\" (UID: \"46ad5798-0340-4e23-947d-4b2ca7cc0895\") " pod="openstack/nova-api-0" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.521466 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.547261 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47e5f3ce-9596-4be2-a8e1-363a7abd090f-scripts\") pod \"nova-cell1-cell-mapping-xz9gg\" (UID: \"47e5f3ce-9596-4be2-a8e1-363a7abd090f\") " pod="openstack/nova-cell1-cell-mapping-xz9gg" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.547315 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47e5f3ce-9596-4be2-a8e1-363a7abd090f-config-data\") pod \"nova-cell1-cell-mapping-xz9gg\" (UID: \"47e5f3ce-9596-4be2-a8e1-363a7abd090f\") " pod="openstack/nova-cell1-cell-mapping-xz9gg" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.547378 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47e5f3ce-9596-4be2-a8e1-363a7abd090f-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xz9gg\" (UID: \"47e5f3ce-9596-4be2-a8e1-363a7abd090f\") " pod="openstack/nova-cell1-cell-mapping-xz9gg" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.547454 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xsbw\" (UniqueName: 
\"kubernetes.io/projected/47e5f3ce-9596-4be2-a8e1-363a7abd090f-kube-api-access-9xsbw\") pod \"nova-cell1-cell-mapping-xz9gg\" (UID: \"47e5f3ce-9596-4be2-a8e1-363a7abd090f\") " pod="openstack/nova-cell1-cell-mapping-xz9gg" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.553015 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47e5f3ce-9596-4be2-a8e1-363a7abd090f-scripts\") pod \"nova-cell1-cell-mapping-xz9gg\" (UID: \"47e5f3ce-9596-4be2-a8e1-363a7abd090f\") " pod="openstack/nova-cell1-cell-mapping-xz9gg" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.558863 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47e5f3ce-9596-4be2-a8e1-363a7abd090f-config-data\") pod \"nova-cell1-cell-mapping-xz9gg\" (UID: \"47e5f3ce-9596-4be2-a8e1-363a7abd090f\") " pod="openstack/nova-cell1-cell-mapping-xz9gg" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.561744 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47e5f3ce-9596-4be2-a8e1-363a7abd090f-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-xz9gg\" (UID: \"47e5f3ce-9596-4be2-a8e1-363a7abd090f\") " pod="openstack/nova-cell1-cell-mapping-xz9gg" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.576754 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xsbw\" (UniqueName: \"kubernetes.io/projected/47e5f3ce-9596-4be2-a8e1-363a7abd090f-kube-api-access-9xsbw\") pod \"nova-cell1-cell-mapping-xz9gg\" (UID: \"47e5f3ce-9596-4be2-a8e1-363a7abd090f\") " pod="openstack/nova-cell1-cell-mapping-xz9gg" Feb 28 04:56:08 crc kubenswrapper[5014]: I0228 04:56:08.746293 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xz9gg" Feb 28 04:56:09 crc kubenswrapper[5014]: I0228 04:56:09.019368 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 28 04:56:09 crc kubenswrapper[5014]: I0228 04:56:09.129702 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"522b8e6d-5531-4436-9c64-fadde40a77df","Type":"ContainerStarted","Data":"ce752df55e557abfd7ed12b931609f0e5d393df031b5c4436b26572f230f8c9d"} Feb 28 04:56:09 crc kubenswrapper[5014]: I0228 04:56:09.131289 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"46ad5798-0340-4e23-947d-4b2ca7cc0895","Type":"ContainerStarted","Data":"114289a278caf0b83de3b8f05a25e09fd67679a476f52b8d76abffa826c996b6"} Feb 28 04:56:09 crc kubenswrapper[5014]: I0228 04:56:09.171496 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-xz9gg"] Feb 28 04:56:10 crc kubenswrapper[5014]: I0228 04:56:10.146529 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"522b8e6d-5531-4436-9c64-fadde40a77df","Type":"ContainerStarted","Data":"0dba5b288223a70765cdb6eb09beaefad4dca09ce64214059b6861501755a715"} Feb 28 04:56:10 crc kubenswrapper[5014]: I0228 04:56:10.150360 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xz9gg" event={"ID":"47e5f3ce-9596-4be2-a8e1-363a7abd090f","Type":"ContainerStarted","Data":"945a4a6d7e42e896cfa5eed88c95b74d1e4eba29597b63eb863f7e55fb09e0ae"} Feb 28 04:56:10 crc kubenswrapper[5014]: I0228 04:56:10.150405 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xz9gg" event={"ID":"47e5f3ce-9596-4be2-a8e1-363a7abd090f","Type":"ContainerStarted","Data":"6d2f2b79bbb8162009071d0a40d687231f1f9a506d962791e03ebadd6c5f0b90"} Feb 28 04:56:10 crc kubenswrapper[5014]: I0228 04:56:10.156634 5014 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"46ad5798-0340-4e23-947d-4b2ca7cc0895","Type":"ContainerStarted","Data":"b753d3c88e6a6ecb07f5a83db2f280661a60c0cd18ad20379e53df836517cee7"} Feb 28 04:56:10 crc kubenswrapper[5014]: I0228 04:56:10.156676 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"46ad5798-0340-4e23-947d-4b2ca7cc0895","Type":"ContainerStarted","Data":"5739c3103b0bdada5e78f52af49fc46ecf19deff1684d6246b059f935abe7a5c"} Feb 28 04:56:10 crc kubenswrapper[5014]: I0228 04:56:10.170257 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-xz9gg" podStartSLOduration=2.170237249 podStartE2EDuration="2.170237249s" podCreationTimestamp="2026-02-28 04:56:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:56:10.163795396 +0000 UTC m=+1358.833921306" watchObservedRunningTime="2026-02-28 04:56:10.170237249 +0000 UTC m=+1358.840363159" Feb 28 04:56:10 crc kubenswrapper[5014]: I0228 04:56:10.189862 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.189843875 podStartE2EDuration="2.189843875s" podCreationTimestamp="2026-02-28 04:56:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:56:10.187712778 +0000 UTC m=+1358.857838678" watchObservedRunningTime="2026-02-28 04:56:10.189843875 +0000 UTC m=+1358.859969785" Feb 28 04:56:11 crc kubenswrapper[5014]: I0228 04:56:11.168971 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"522b8e6d-5531-4436-9c64-fadde40a77df","Type":"ContainerStarted","Data":"22c3c36d3967533b330f33e168eafb8af865bfa2c4d370d092e04e053b6fbd6e"} Feb 28 04:56:11 crc kubenswrapper[5014]: I0228 04:56:11.528427 5014 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-89c5cd4d5-4b7rx" Feb 28 04:56:11 crc kubenswrapper[5014]: I0228 04:56:11.595792 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-b7gl9"] Feb 28 04:56:11 crc kubenswrapper[5014]: I0228 04:56:11.596031 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-757b4f8459-b7gl9" podUID="facf8396-8625-4f68-9167-be011dd01a6b" containerName="dnsmasq-dns" containerID="cri-o://1a95e2c1e3d8200dba02f5832879431e99ebff1b2dd2e907fb6f71067b755341" gracePeriod=10 Feb 28 04:56:11 crc kubenswrapper[5014]: I0228 04:56:11.807380 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-757b4f8459-b7gl9" podUID="facf8396-8625-4f68-9167-be011dd01a6b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.188:5353: connect: connection refused" Feb 28 04:56:12 crc kubenswrapper[5014]: I0228 04:56:12.124489 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-b7gl9" Feb 28 04:56:12 crc kubenswrapper[5014]: I0228 04:56:12.183424 5014 generic.go:334] "Generic (PLEG): container finished" podID="facf8396-8625-4f68-9167-be011dd01a6b" containerID="1a95e2c1e3d8200dba02f5832879431e99ebff1b2dd2e907fb6f71067b755341" exitCode=0 Feb 28 04:56:12 crc kubenswrapper[5014]: I0228 04:56:12.184876 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-757b4f8459-b7gl9" Feb 28 04:56:12 crc kubenswrapper[5014]: I0228 04:56:12.187997 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-b7gl9" event={"ID":"facf8396-8625-4f68-9167-be011dd01a6b","Type":"ContainerDied","Data":"1a95e2c1e3d8200dba02f5832879431e99ebff1b2dd2e907fb6f71067b755341"} Feb 28 04:56:12 crc kubenswrapper[5014]: I0228 04:56:12.188044 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-757b4f8459-b7gl9" event={"ID":"facf8396-8625-4f68-9167-be011dd01a6b","Type":"ContainerDied","Data":"9ea617ac903d25bf6ec72f60bc879bed2eba89c58ba6aaab45b57aeddd454bf1"} Feb 28 04:56:12 crc kubenswrapper[5014]: I0228 04:56:12.188067 5014 scope.go:117] "RemoveContainer" containerID="1a95e2c1e3d8200dba02f5832879431e99ebff1b2dd2e907fb6f71067b755341" Feb 28 04:56:12 crc kubenswrapper[5014]: I0228 04:56:12.219070 5014 scope.go:117] "RemoveContainer" containerID="5ccb2505a1e0aed9b6ef3d2cac84886e163e740d64e7ed4e0c9b8efc2be11d2c" Feb 28 04:56:12 crc kubenswrapper[5014]: I0228 04:56:12.232717 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/facf8396-8625-4f68-9167-be011dd01a6b-config\") pod \"facf8396-8625-4f68-9167-be011dd01a6b\" (UID: \"facf8396-8625-4f68-9167-be011dd01a6b\") " Feb 28 04:56:12 crc kubenswrapper[5014]: I0228 04:56:12.232867 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6gl9\" (UniqueName: \"kubernetes.io/projected/facf8396-8625-4f68-9167-be011dd01a6b-kube-api-access-t6gl9\") pod \"facf8396-8625-4f68-9167-be011dd01a6b\" (UID: \"facf8396-8625-4f68-9167-be011dd01a6b\") " Feb 28 04:56:12 crc kubenswrapper[5014]: I0228 04:56:12.232909 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/facf8396-8625-4f68-9167-be011dd01a6b-dns-swift-storage-0\") pod \"facf8396-8625-4f68-9167-be011dd01a6b\" (UID: \"facf8396-8625-4f68-9167-be011dd01a6b\") " Feb 28 04:56:12 crc kubenswrapper[5014]: I0228 04:56:12.232949 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/facf8396-8625-4f68-9167-be011dd01a6b-ovsdbserver-nb\") pod \"facf8396-8625-4f68-9167-be011dd01a6b\" (UID: \"facf8396-8625-4f68-9167-be011dd01a6b\") " Feb 28 04:56:12 crc kubenswrapper[5014]: I0228 04:56:12.233040 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/facf8396-8625-4f68-9167-be011dd01a6b-dns-svc\") pod \"facf8396-8625-4f68-9167-be011dd01a6b\" (UID: \"facf8396-8625-4f68-9167-be011dd01a6b\") " Feb 28 04:56:12 crc kubenswrapper[5014]: I0228 04:56:12.233131 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/facf8396-8625-4f68-9167-be011dd01a6b-ovsdbserver-sb\") pod \"facf8396-8625-4f68-9167-be011dd01a6b\" (UID: \"facf8396-8625-4f68-9167-be011dd01a6b\") " Feb 28 04:56:12 crc kubenswrapper[5014]: I0228 04:56:12.247450 5014 scope.go:117] "RemoveContainer" containerID="1a95e2c1e3d8200dba02f5832879431e99ebff1b2dd2e907fb6f71067b755341" Feb 28 04:56:12 crc kubenswrapper[5014]: I0228 04:56:12.247668 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/facf8396-8625-4f68-9167-be011dd01a6b-kube-api-access-t6gl9" (OuterVolumeSpecName: "kube-api-access-t6gl9") pod "facf8396-8625-4f68-9167-be011dd01a6b" (UID: "facf8396-8625-4f68-9167-be011dd01a6b"). InnerVolumeSpecName "kube-api-access-t6gl9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:56:12 crc kubenswrapper[5014]: E0228 04:56:12.249725 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a95e2c1e3d8200dba02f5832879431e99ebff1b2dd2e907fb6f71067b755341\": container with ID starting with 1a95e2c1e3d8200dba02f5832879431e99ebff1b2dd2e907fb6f71067b755341 not found: ID does not exist" containerID="1a95e2c1e3d8200dba02f5832879431e99ebff1b2dd2e907fb6f71067b755341" Feb 28 04:56:12 crc kubenswrapper[5014]: I0228 04:56:12.249773 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a95e2c1e3d8200dba02f5832879431e99ebff1b2dd2e907fb6f71067b755341"} err="failed to get container status \"1a95e2c1e3d8200dba02f5832879431e99ebff1b2dd2e907fb6f71067b755341\": rpc error: code = NotFound desc = could not find container \"1a95e2c1e3d8200dba02f5832879431e99ebff1b2dd2e907fb6f71067b755341\": container with ID starting with 1a95e2c1e3d8200dba02f5832879431e99ebff1b2dd2e907fb6f71067b755341 not found: ID does not exist" Feb 28 04:56:12 crc kubenswrapper[5014]: I0228 04:56:12.251446 5014 scope.go:117] "RemoveContainer" containerID="5ccb2505a1e0aed9b6ef3d2cac84886e163e740d64e7ed4e0c9b8efc2be11d2c" Feb 28 04:56:12 crc kubenswrapper[5014]: E0228 04:56:12.252026 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ccb2505a1e0aed9b6ef3d2cac84886e163e740d64e7ed4e0c9b8efc2be11d2c\": container with ID starting with 5ccb2505a1e0aed9b6ef3d2cac84886e163e740d64e7ed4e0c9b8efc2be11d2c not found: ID does not exist" containerID="5ccb2505a1e0aed9b6ef3d2cac84886e163e740d64e7ed4e0c9b8efc2be11d2c" Feb 28 04:56:12 crc kubenswrapper[5014]: I0228 04:56:12.252056 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ccb2505a1e0aed9b6ef3d2cac84886e163e740d64e7ed4e0c9b8efc2be11d2c"} 
err="failed to get container status \"5ccb2505a1e0aed9b6ef3d2cac84886e163e740d64e7ed4e0c9b8efc2be11d2c\": rpc error: code = NotFound desc = could not find container \"5ccb2505a1e0aed9b6ef3d2cac84886e163e740d64e7ed4e0c9b8efc2be11d2c\": container with ID starting with 5ccb2505a1e0aed9b6ef3d2cac84886e163e740d64e7ed4e0c9b8efc2be11d2c not found: ID does not exist" Feb 28 04:56:12 crc kubenswrapper[5014]: I0228 04:56:12.286093 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/facf8396-8625-4f68-9167-be011dd01a6b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "facf8396-8625-4f68-9167-be011dd01a6b" (UID: "facf8396-8625-4f68-9167-be011dd01a6b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:56:12 crc kubenswrapper[5014]: I0228 04:56:12.291735 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/facf8396-8625-4f68-9167-be011dd01a6b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "facf8396-8625-4f68-9167-be011dd01a6b" (UID: "facf8396-8625-4f68-9167-be011dd01a6b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:56:12 crc kubenswrapper[5014]: I0228 04:56:12.294683 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/facf8396-8625-4f68-9167-be011dd01a6b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "facf8396-8625-4f68-9167-be011dd01a6b" (UID: "facf8396-8625-4f68-9167-be011dd01a6b"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:56:12 crc kubenswrapper[5014]: I0228 04:56:12.306090 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/facf8396-8625-4f68-9167-be011dd01a6b-config" (OuterVolumeSpecName: "config") pod "facf8396-8625-4f68-9167-be011dd01a6b" (UID: "facf8396-8625-4f68-9167-be011dd01a6b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:56:12 crc kubenswrapper[5014]: I0228 04:56:12.322582 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/facf8396-8625-4f68-9167-be011dd01a6b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "facf8396-8625-4f68-9167-be011dd01a6b" (UID: "facf8396-8625-4f68-9167-be011dd01a6b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:56:12 crc kubenswrapper[5014]: I0228 04:56:12.335401 5014 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/facf8396-8625-4f68-9167-be011dd01a6b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:12 crc kubenswrapper[5014]: I0228 04:56:12.336322 5014 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/facf8396-8625-4f68-9167-be011dd01a6b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:12 crc kubenswrapper[5014]: I0228 04:56:12.336558 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/facf8396-8625-4f68-9167-be011dd01a6b-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:12 crc kubenswrapper[5014]: I0228 04:56:12.336630 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6gl9\" (UniqueName: \"kubernetes.io/projected/facf8396-8625-4f68-9167-be011dd01a6b-kube-api-access-t6gl9\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:12 crc kubenswrapper[5014]: I0228 
04:56:12.336711 5014 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/facf8396-8625-4f68-9167-be011dd01a6b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:12 crc kubenswrapper[5014]: I0228 04:56:12.336797 5014 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/facf8396-8625-4f68-9167-be011dd01a6b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:12 crc kubenswrapper[5014]: I0228 04:56:12.530582 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-b7gl9"] Feb 28 04:56:12 crc kubenswrapper[5014]: I0228 04:56:12.569641 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-757b4f8459-b7gl9"] Feb 28 04:56:13 crc kubenswrapper[5014]: I0228 04:56:13.195730 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"522b8e6d-5531-4436-9c64-fadde40a77df","Type":"ContainerStarted","Data":"194ebe2699f1e6c763488235779faa5fdc6c0067b0597535281e4a4777f6e2e9"} Feb 28 04:56:13 crc kubenswrapper[5014]: I0228 04:56:13.196006 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 28 04:56:13 crc kubenswrapper[5014]: I0228 04:56:13.216157 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.95585661 podStartE2EDuration="6.216142277s" podCreationTimestamp="2026-02-28 04:56:07 +0000 UTC" firstStartedPulling="2026-02-28 04:56:08.045779398 +0000 UTC m=+1356.715905308" lastFinishedPulling="2026-02-28 04:56:12.306065065 +0000 UTC m=+1360.976190975" observedRunningTime="2026-02-28 04:56:13.213393553 +0000 UTC m=+1361.883519483" watchObservedRunningTime="2026-02-28 04:56:13.216142277 +0000 UTC m=+1361.886268187" Feb 28 04:56:14 crc kubenswrapper[5014]: I0228 04:56:14.184358 5014 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="facf8396-8625-4f68-9167-be011dd01a6b" path="/var/lib/kubelet/pods/facf8396-8625-4f68-9167-be011dd01a6b/volumes" Feb 28 04:56:15 crc kubenswrapper[5014]: I0228 04:56:15.218873 5014 generic.go:334] "Generic (PLEG): container finished" podID="47e5f3ce-9596-4be2-a8e1-363a7abd090f" containerID="945a4a6d7e42e896cfa5eed88c95b74d1e4eba29597b63eb863f7e55fb09e0ae" exitCode=0 Feb 28 04:56:15 crc kubenswrapper[5014]: I0228 04:56:15.218965 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xz9gg" event={"ID":"47e5f3ce-9596-4be2-a8e1-363a7abd090f","Type":"ContainerDied","Data":"945a4a6d7e42e896cfa5eed88c95b74d1e4eba29597b63eb863f7e55fb09e0ae"} Feb 28 04:56:16 crc kubenswrapper[5014]: I0228 04:56:16.702240 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xz9gg" Feb 28 04:56:16 crc kubenswrapper[5014]: I0228 04:56:16.823684 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47e5f3ce-9596-4be2-a8e1-363a7abd090f-scripts\") pod \"47e5f3ce-9596-4be2-a8e1-363a7abd090f\" (UID: \"47e5f3ce-9596-4be2-a8e1-363a7abd090f\") " Feb 28 04:56:16 crc kubenswrapper[5014]: I0228 04:56:16.823751 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xsbw\" (UniqueName: \"kubernetes.io/projected/47e5f3ce-9596-4be2-a8e1-363a7abd090f-kube-api-access-9xsbw\") pod \"47e5f3ce-9596-4be2-a8e1-363a7abd090f\" (UID: \"47e5f3ce-9596-4be2-a8e1-363a7abd090f\") " Feb 28 04:56:16 crc kubenswrapper[5014]: I0228 04:56:16.823778 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47e5f3ce-9596-4be2-a8e1-363a7abd090f-config-data\") pod \"47e5f3ce-9596-4be2-a8e1-363a7abd090f\" (UID: \"47e5f3ce-9596-4be2-a8e1-363a7abd090f\") " Feb 28 04:56:16 crc kubenswrapper[5014]: 
I0228 04:56:16.823963 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47e5f3ce-9596-4be2-a8e1-363a7abd090f-combined-ca-bundle\") pod \"47e5f3ce-9596-4be2-a8e1-363a7abd090f\" (UID: \"47e5f3ce-9596-4be2-a8e1-363a7abd090f\") " Feb 28 04:56:16 crc kubenswrapper[5014]: I0228 04:56:16.829962 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47e5f3ce-9596-4be2-a8e1-363a7abd090f-kube-api-access-9xsbw" (OuterVolumeSpecName: "kube-api-access-9xsbw") pod "47e5f3ce-9596-4be2-a8e1-363a7abd090f" (UID: "47e5f3ce-9596-4be2-a8e1-363a7abd090f"). InnerVolumeSpecName "kube-api-access-9xsbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:56:16 crc kubenswrapper[5014]: I0228 04:56:16.831263 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47e5f3ce-9596-4be2-a8e1-363a7abd090f-scripts" (OuterVolumeSpecName: "scripts") pod "47e5f3ce-9596-4be2-a8e1-363a7abd090f" (UID: "47e5f3ce-9596-4be2-a8e1-363a7abd090f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:56:16 crc kubenswrapper[5014]: I0228 04:56:16.857687 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47e5f3ce-9596-4be2-a8e1-363a7abd090f-config-data" (OuterVolumeSpecName: "config-data") pod "47e5f3ce-9596-4be2-a8e1-363a7abd090f" (UID: "47e5f3ce-9596-4be2-a8e1-363a7abd090f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:56:16 crc kubenswrapper[5014]: I0228 04:56:16.858381 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47e5f3ce-9596-4be2-a8e1-363a7abd090f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "47e5f3ce-9596-4be2-a8e1-363a7abd090f" (UID: "47e5f3ce-9596-4be2-a8e1-363a7abd090f"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:56:16 crc kubenswrapper[5014]: I0228 04:56:16.928024 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47e5f3ce-9596-4be2-a8e1-363a7abd090f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:16 crc kubenswrapper[5014]: I0228 04:56:16.928060 5014 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/47e5f3ce-9596-4be2-a8e1-363a7abd090f-scripts\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:16 crc kubenswrapper[5014]: I0228 04:56:16.928069 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xsbw\" (UniqueName: \"kubernetes.io/projected/47e5f3ce-9596-4be2-a8e1-363a7abd090f-kube-api-access-9xsbw\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:16 crc kubenswrapper[5014]: I0228 04:56:16.928081 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47e5f3ce-9596-4be2-a8e1-363a7abd090f-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:17 crc kubenswrapper[5014]: I0228 04:56:17.243182 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-xz9gg" event={"ID":"47e5f3ce-9596-4be2-a8e1-363a7abd090f","Type":"ContainerDied","Data":"6d2f2b79bbb8162009071d0a40d687231f1f9a506d962791e03ebadd6c5f0b90"} Feb 28 04:56:17 crc kubenswrapper[5014]: I0228 04:56:17.243499 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d2f2b79bbb8162009071d0a40d687231f1f9a506d962791e03ebadd6c5f0b90" Feb 28 04:56:17 crc kubenswrapper[5014]: I0228 04:56:17.243257 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-xz9gg" Feb 28 04:56:17 crc kubenswrapper[5014]: I0228 04:56:17.442193 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 28 04:56:17 crc kubenswrapper[5014]: I0228 04:56:17.442500 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="46ad5798-0340-4e23-947d-4b2ca7cc0895" containerName="nova-api-log" containerID="cri-o://5739c3103b0bdada5e78f52af49fc46ecf19deff1684d6246b059f935abe7a5c" gracePeriod=30 Feb 28 04:56:17 crc kubenswrapper[5014]: I0228 04:56:17.442584 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="46ad5798-0340-4e23-947d-4b2ca7cc0895" containerName="nova-api-api" containerID="cri-o://b753d3c88e6a6ecb07f5a83db2f280661a60c0cd18ad20379e53df836517cee7" gracePeriod=30 Feb 28 04:56:17 crc kubenswrapper[5014]: I0228 04:56:17.459257 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 04:56:17 crc kubenswrapper[5014]: I0228 04:56:17.459521 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="2b76306f-13bd-4df4-a8a2-d3a1eede7020" containerName="nova-scheduler-scheduler" containerID="cri-o://3336e496711e2ad799de320a9cb7fcdfc3046fdddf17c6e42eb0becbe91e7955" gracePeriod=30 Feb 28 04:56:17 crc kubenswrapper[5014]: I0228 04:56:17.588899 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 04:56:17 crc kubenswrapper[5014]: I0228 04:56:17.589125 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6" containerName="nova-metadata-log" containerID="cri-o://98fe048c4c0aba06e2bb403dcf4a5832780969c4662afdedee99c2393b86aa6a" gracePeriod=30 Feb 28 04:56:17 crc kubenswrapper[5014]: I0228 04:56:17.589213 5014 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6" containerName="nova-metadata-metadata" containerID="cri-o://5c8ee2339b57f90aaa1826f2d6799cb71d83820646554e51685ea9c951f7bddb" gracePeriod=30 Feb 28 04:56:18 crc kubenswrapper[5014]: I0228 04:56:18.253040 5014 generic.go:334] "Generic (PLEG): container finished" podID="119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6" containerID="98fe048c4c0aba06e2bb403dcf4a5832780969c4662afdedee99c2393b86aa6a" exitCode=143 Feb 28 04:56:18 crc kubenswrapper[5014]: I0228 04:56:18.253152 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6","Type":"ContainerDied","Data":"98fe048c4c0aba06e2bb403dcf4a5832780969c4662afdedee99c2393b86aa6a"} Feb 28 04:56:18 crc kubenswrapper[5014]: I0228 04:56:18.255163 5014 generic.go:334] "Generic (PLEG): container finished" podID="46ad5798-0340-4e23-947d-4b2ca7cc0895" containerID="b753d3c88e6a6ecb07f5a83db2f280661a60c0cd18ad20379e53df836517cee7" exitCode=0 Feb 28 04:56:18 crc kubenswrapper[5014]: I0228 04:56:18.255190 5014 generic.go:334] "Generic (PLEG): container finished" podID="46ad5798-0340-4e23-947d-4b2ca7cc0895" containerID="5739c3103b0bdada5e78f52af49fc46ecf19deff1684d6246b059f935abe7a5c" exitCode=143 Feb 28 04:56:18 crc kubenswrapper[5014]: I0228 04:56:18.255209 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"46ad5798-0340-4e23-947d-4b2ca7cc0895","Type":"ContainerDied","Data":"b753d3c88e6a6ecb07f5a83db2f280661a60c0cd18ad20379e53df836517cee7"} Feb 28 04:56:18 crc kubenswrapper[5014]: I0228 04:56:18.255232 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"46ad5798-0340-4e23-947d-4b2ca7cc0895","Type":"ContainerDied","Data":"5739c3103b0bdada5e78f52af49fc46ecf19deff1684d6246b059f935abe7a5c"} Feb 28 04:56:18 crc kubenswrapper[5014]: I0228 
04:56:18.486946 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 28 04:56:18 crc kubenswrapper[5014]: I0228 04:56:18.579742 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plnrj\" (UniqueName: \"kubernetes.io/projected/46ad5798-0340-4e23-947d-4b2ca7cc0895-kube-api-access-plnrj\") pod \"46ad5798-0340-4e23-947d-4b2ca7cc0895\" (UID: \"46ad5798-0340-4e23-947d-4b2ca7cc0895\") " Feb 28 04:56:18 crc kubenswrapper[5014]: I0228 04:56:18.579872 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46ad5798-0340-4e23-947d-4b2ca7cc0895-config-data\") pod \"46ad5798-0340-4e23-947d-4b2ca7cc0895\" (UID: \"46ad5798-0340-4e23-947d-4b2ca7cc0895\") " Feb 28 04:56:18 crc kubenswrapper[5014]: I0228 04:56:18.579966 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ad5798-0340-4e23-947d-4b2ca7cc0895-internal-tls-certs\") pod \"46ad5798-0340-4e23-947d-4b2ca7cc0895\" (UID: \"46ad5798-0340-4e23-947d-4b2ca7cc0895\") " Feb 28 04:56:18 crc kubenswrapper[5014]: I0228 04:56:18.580122 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46ad5798-0340-4e23-947d-4b2ca7cc0895-logs\") pod \"46ad5798-0340-4e23-947d-4b2ca7cc0895\" (UID: \"46ad5798-0340-4e23-947d-4b2ca7cc0895\") " Feb 28 04:56:18 crc kubenswrapper[5014]: I0228 04:56:18.580189 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46ad5798-0340-4e23-947d-4b2ca7cc0895-combined-ca-bundle\") pod \"46ad5798-0340-4e23-947d-4b2ca7cc0895\" (UID: \"46ad5798-0340-4e23-947d-4b2ca7cc0895\") " Feb 28 04:56:18 crc kubenswrapper[5014]: I0228 04:56:18.580239 5014 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ad5798-0340-4e23-947d-4b2ca7cc0895-public-tls-certs\") pod \"46ad5798-0340-4e23-947d-4b2ca7cc0895\" (UID: \"46ad5798-0340-4e23-947d-4b2ca7cc0895\") " Feb 28 04:56:18 crc kubenswrapper[5014]: I0228 04:56:18.581604 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46ad5798-0340-4e23-947d-4b2ca7cc0895-logs" (OuterVolumeSpecName: "logs") pod "46ad5798-0340-4e23-947d-4b2ca7cc0895" (UID: "46ad5798-0340-4e23-947d-4b2ca7cc0895"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:56:18 crc kubenswrapper[5014]: I0228 04:56:18.603062 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46ad5798-0340-4e23-947d-4b2ca7cc0895-kube-api-access-plnrj" (OuterVolumeSpecName: "kube-api-access-plnrj") pod "46ad5798-0340-4e23-947d-4b2ca7cc0895" (UID: "46ad5798-0340-4e23-947d-4b2ca7cc0895"). InnerVolumeSpecName "kube-api-access-plnrj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:56:18 crc kubenswrapper[5014]: I0228 04:56:18.610081 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46ad5798-0340-4e23-947d-4b2ca7cc0895-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46ad5798-0340-4e23-947d-4b2ca7cc0895" (UID: "46ad5798-0340-4e23-947d-4b2ca7cc0895"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:56:18 crc kubenswrapper[5014]: I0228 04:56:18.619641 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46ad5798-0340-4e23-947d-4b2ca7cc0895-config-data" (OuterVolumeSpecName: "config-data") pod "46ad5798-0340-4e23-947d-4b2ca7cc0895" (UID: "46ad5798-0340-4e23-947d-4b2ca7cc0895"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:56:18 crc kubenswrapper[5014]: I0228 04:56:18.636045 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46ad5798-0340-4e23-947d-4b2ca7cc0895-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "46ad5798-0340-4e23-947d-4b2ca7cc0895" (UID: "46ad5798-0340-4e23-947d-4b2ca7cc0895"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:56:18 crc kubenswrapper[5014]: I0228 04:56:18.639723 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46ad5798-0340-4e23-947d-4b2ca7cc0895-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "46ad5798-0340-4e23-947d-4b2ca7cc0895" (UID: "46ad5798-0340-4e23-947d-4b2ca7cc0895"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:56:18 crc kubenswrapper[5014]: I0228 04:56:18.687857 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plnrj\" (UniqueName: \"kubernetes.io/projected/46ad5798-0340-4e23-947d-4b2ca7cc0895-kube-api-access-plnrj\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:18 crc kubenswrapper[5014]: I0228 04:56:18.687910 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46ad5798-0340-4e23-947d-4b2ca7cc0895-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:18 crc kubenswrapper[5014]: I0228 04:56:18.687927 5014 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ad5798-0340-4e23-947d-4b2ca7cc0895-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:18 crc kubenswrapper[5014]: I0228 04:56:18.687940 5014 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46ad5798-0340-4e23-947d-4b2ca7cc0895-logs\") on node \"crc\" DevicePath 
\"\"" Feb 28 04:56:18 crc kubenswrapper[5014]: I0228 04:56:18.687952 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46ad5798-0340-4e23-947d-4b2ca7cc0895-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:18 crc kubenswrapper[5014]: I0228 04:56:18.687963 5014 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ad5798-0340-4e23-947d-4b2ca7cc0895-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:19 crc kubenswrapper[5014]: E0228 04:56:19.083040 5014 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3336e496711e2ad799de320a9cb7fcdfc3046fdddf17c6e42eb0becbe91e7955" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 28 04:56:19 crc kubenswrapper[5014]: E0228 04:56:19.084703 5014 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3336e496711e2ad799de320a9cb7fcdfc3046fdddf17c6e42eb0becbe91e7955" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 28 04:56:19 crc kubenswrapper[5014]: E0228 04:56:19.086165 5014 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3336e496711e2ad799de320a9cb7fcdfc3046fdddf17c6e42eb0becbe91e7955" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 28 04:56:19 crc kubenswrapper[5014]: E0228 04:56:19.086202 5014 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" 
pod="openstack/nova-scheduler-0" podUID="2b76306f-13bd-4df4-a8a2-d3a1eede7020" containerName="nova-scheduler-scheduler" Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.264598 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"46ad5798-0340-4e23-947d-4b2ca7cc0895","Type":"ContainerDied","Data":"114289a278caf0b83de3b8f05a25e09fd67679a476f52b8d76abffa826c996b6"} Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.264653 5014 scope.go:117] "RemoveContainer" containerID="b753d3c88e6a6ecb07f5a83db2f280661a60c0cd18ad20379e53df836517cee7" Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.264653 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.295033 5014 scope.go:117] "RemoveContainer" containerID="5739c3103b0bdada5e78f52af49fc46ecf19deff1684d6246b059f935abe7a5c" Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.305400 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.311575 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.338833 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 28 04:56:19 crc kubenswrapper[5014]: E0228 04:56:19.340450 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="facf8396-8625-4f68-9167-be011dd01a6b" containerName="init" Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.340494 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="facf8396-8625-4f68-9167-be011dd01a6b" containerName="init" Feb 28 04:56:19 crc kubenswrapper[5014]: E0228 04:56:19.340538 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47e5f3ce-9596-4be2-a8e1-363a7abd090f" containerName="nova-manage" Feb 28 04:56:19 crc kubenswrapper[5014]: 
I0228 04:56:19.340550 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="47e5f3ce-9596-4be2-a8e1-363a7abd090f" containerName="nova-manage" Feb 28 04:56:19 crc kubenswrapper[5014]: E0228 04:56:19.340590 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="facf8396-8625-4f68-9167-be011dd01a6b" containerName="dnsmasq-dns" Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.340601 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="facf8396-8625-4f68-9167-be011dd01a6b" containerName="dnsmasq-dns" Feb 28 04:56:19 crc kubenswrapper[5014]: E0228 04:56:19.340654 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46ad5798-0340-4e23-947d-4b2ca7cc0895" containerName="nova-api-api" Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.340667 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="46ad5798-0340-4e23-947d-4b2ca7cc0895" containerName="nova-api-api" Feb 28 04:56:19 crc kubenswrapper[5014]: E0228 04:56:19.340697 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46ad5798-0340-4e23-947d-4b2ca7cc0895" containerName="nova-api-log" Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.340709 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="46ad5798-0340-4e23-947d-4b2ca7cc0895" containerName="nova-api-log" Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.341514 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="47e5f3ce-9596-4be2-a8e1-363a7abd090f" containerName="nova-manage" Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.341576 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="facf8396-8625-4f68-9167-be011dd01a6b" containerName="dnsmasq-dns" Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.341596 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="46ad5798-0340-4e23-947d-4b2ca7cc0895" containerName="nova-api-api" Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.341621 5014 
memory_manager.go:354] "RemoveStaleState removing state" podUID="46ad5798-0340-4e23-947d-4b2ca7cc0895" containerName="nova-api-log" Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.347134 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.356521 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.356535 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.357373 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.434959 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.505743 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4e1baa8-fe04-453a-8462-e7de1e98ba73-logs\") pod \"nova-api-0\" (UID: \"d4e1baa8-fe04-453a-8462-e7de1e98ba73\") " pod="openstack/nova-api-0" Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.505875 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tftrt\" (UniqueName: \"kubernetes.io/projected/d4e1baa8-fe04-453a-8462-e7de1e98ba73-kube-api-access-tftrt\") pod \"nova-api-0\" (UID: \"d4e1baa8-fe04-453a-8462-e7de1e98ba73\") " pod="openstack/nova-api-0" Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.505903 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4e1baa8-fe04-453a-8462-e7de1e98ba73-config-data\") pod \"nova-api-0\" (UID: 
\"d4e1baa8-fe04-453a-8462-e7de1e98ba73\") " pod="openstack/nova-api-0" Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.506025 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4e1baa8-fe04-453a-8462-e7de1e98ba73-public-tls-certs\") pod \"nova-api-0\" (UID: \"d4e1baa8-fe04-453a-8462-e7de1e98ba73\") " pod="openstack/nova-api-0" Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.506101 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4e1baa8-fe04-453a-8462-e7de1e98ba73-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d4e1baa8-fe04-453a-8462-e7de1e98ba73\") " pod="openstack/nova-api-0" Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.506174 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4e1baa8-fe04-453a-8462-e7de1e98ba73-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d4e1baa8-fe04-453a-8462-e7de1e98ba73\") " pod="openstack/nova-api-0" Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.607752 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4e1baa8-fe04-453a-8462-e7de1e98ba73-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d4e1baa8-fe04-453a-8462-e7de1e98ba73\") " pod="openstack/nova-api-0" Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.607843 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4e1baa8-fe04-453a-8462-e7de1e98ba73-logs\") pod \"nova-api-0\" (UID: \"d4e1baa8-fe04-453a-8462-e7de1e98ba73\") " pod="openstack/nova-api-0" Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.607955 5014 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-tftrt\" (UniqueName: \"kubernetes.io/projected/d4e1baa8-fe04-453a-8462-e7de1e98ba73-kube-api-access-tftrt\") pod \"nova-api-0\" (UID: \"d4e1baa8-fe04-453a-8462-e7de1e98ba73\") " pod="openstack/nova-api-0" Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.607991 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4e1baa8-fe04-453a-8462-e7de1e98ba73-config-data\") pod \"nova-api-0\" (UID: \"d4e1baa8-fe04-453a-8462-e7de1e98ba73\") " pod="openstack/nova-api-0" Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.608034 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4e1baa8-fe04-453a-8462-e7de1e98ba73-public-tls-certs\") pod \"nova-api-0\" (UID: \"d4e1baa8-fe04-453a-8462-e7de1e98ba73\") " pod="openstack/nova-api-0" Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.608093 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4e1baa8-fe04-453a-8462-e7de1e98ba73-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d4e1baa8-fe04-453a-8462-e7de1e98ba73\") " pod="openstack/nova-api-0" Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.608509 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d4e1baa8-fe04-453a-8462-e7de1e98ba73-logs\") pod \"nova-api-0\" (UID: \"d4e1baa8-fe04-453a-8462-e7de1e98ba73\") " pod="openstack/nova-api-0" Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.612445 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4e1baa8-fe04-453a-8462-e7de1e98ba73-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d4e1baa8-fe04-453a-8462-e7de1e98ba73\") " 
pod="openstack/nova-api-0" Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.613229 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4e1baa8-fe04-453a-8462-e7de1e98ba73-public-tls-certs\") pod \"nova-api-0\" (UID: \"d4e1baa8-fe04-453a-8462-e7de1e98ba73\") " pod="openstack/nova-api-0" Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.614481 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4e1baa8-fe04-453a-8462-e7de1e98ba73-config-data\") pod \"nova-api-0\" (UID: \"d4e1baa8-fe04-453a-8462-e7de1e98ba73\") " pod="openstack/nova-api-0" Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.615174 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4e1baa8-fe04-453a-8462-e7de1e98ba73-internal-tls-certs\") pod \"nova-api-0\" (UID: \"d4e1baa8-fe04-453a-8462-e7de1e98ba73\") " pod="openstack/nova-api-0" Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.629265 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tftrt\" (UniqueName: \"kubernetes.io/projected/d4e1baa8-fe04-453a-8462-e7de1e98ba73-kube-api-access-tftrt\") pod \"nova-api-0\" (UID: \"d4e1baa8-fe04-453a-8462-e7de1e98ba73\") " pod="openstack/nova-api-0" Feb 28 04:56:19 crc kubenswrapper[5014]: I0228 04:56:19.732113 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 28 04:56:20 crc kubenswrapper[5014]: I0228 04:56:20.168345 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 28 04:56:20 crc kubenswrapper[5014]: I0228 04:56:20.193771 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46ad5798-0340-4e23-947d-4b2ca7cc0895" path="/var/lib/kubelet/pods/46ad5798-0340-4e23-947d-4b2ca7cc0895/volumes" Feb 28 04:56:20 crc kubenswrapper[5014]: I0228 04:56:20.285468 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d4e1baa8-fe04-453a-8462-e7de1e98ba73","Type":"ContainerStarted","Data":"f642c8fca76707b1c0522a18e99cce05aa69d083907dad782519f71756f457b2"} Feb 28 04:56:21 crc kubenswrapper[5014]: I0228 04:56:21.006014 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": read tcp 10.217.0.2:54642->10.217.0.195:8775: read: connection reset by peer" Feb 28 04:56:21 crc kubenswrapper[5014]: I0228 04:56:21.006014 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.195:8775/\": read tcp 10.217.0.2:54636->10.217.0.195:8775: read: connection reset by peer" Feb 28 04:56:21 crc kubenswrapper[5014]: I0228 04:56:21.307478 5014 generic.go:334] "Generic (PLEG): container finished" podID="119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6" containerID="5c8ee2339b57f90aaa1826f2d6799cb71d83820646554e51685ea9c951f7bddb" exitCode=0 Feb 28 04:56:21 crc kubenswrapper[5014]: I0228 04:56:21.307533 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6","Type":"ContainerDied","Data":"5c8ee2339b57f90aaa1826f2d6799cb71d83820646554e51685ea9c951f7bddb"} Feb 28 04:56:21 crc kubenswrapper[5014]: I0228 04:56:21.312213 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d4e1baa8-fe04-453a-8462-e7de1e98ba73","Type":"ContainerStarted","Data":"4659438716ad6f9caba06c7b70e17d4e69eabb7d876145a405a493c0ea8b8743"} Feb 28 04:56:21 crc kubenswrapper[5014]: I0228 04:56:21.312243 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d4e1baa8-fe04-453a-8462-e7de1e98ba73","Type":"ContainerStarted","Data":"0be6d8f6673674ea52e534ef5e8472a57616e0b6036c3a90e531090264e4a4d0"} Feb 28 04:56:21 crc kubenswrapper[5014]: I0228 04:56:21.652059 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 04:56:21 crc kubenswrapper[5014]: I0228 04:56:21.672871 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.6728476370000003 podStartE2EDuration="2.672847637s" podCreationTimestamp="2026-02-28 04:56:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:56:21.344463323 +0000 UTC m=+1370.014589233" watchObservedRunningTime="2026-02-28 04:56:21.672847637 +0000 UTC m=+1370.342973547" Feb 28 04:56:21 crc kubenswrapper[5014]: I0228 04:56:21.762786 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6-combined-ca-bundle\") pod \"119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6\" (UID: \"119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6\") " Feb 28 04:56:21 crc kubenswrapper[5014]: I0228 04:56:21.762936 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6-logs\") pod \"119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6\" (UID: \"119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6\") " Feb 28 04:56:21 crc kubenswrapper[5014]: I0228 04:56:21.762987 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4l58l\" (UniqueName: \"kubernetes.io/projected/119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6-kube-api-access-4l58l\") pod \"119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6\" (UID: \"119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6\") " Feb 28 04:56:21 crc kubenswrapper[5014]: I0228 04:56:21.763150 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6-config-data\") pod \"119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6\" (UID: \"119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6\") " Feb 28 04:56:21 crc kubenswrapper[5014]: I0228 04:56:21.763374 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6-nova-metadata-tls-certs\") pod \"119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6\" (UID: \"119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6\") " Feb 28 04:56:21 crc kubenswrapper[5014]: I0228 04:56:21.765436 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6-logs" (OuterVolumeSpecName: "logs") pod "119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6" (UID: "119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 04:56:21 crc kubenswrapper[5014]: I0228 04:56:21.783524 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6-kube-api-access-4l58l" (OuterVolumeSpecName: "kube-api-access-4l58l") pod "119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6" (UID: "119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6"). InnerVolumeSpecName "kube-api-access-4l58l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:56:21 crc kubenswrapper[5014]: I0228 04:56:21.811775 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6-config-data" (OuterVolumeSpecName: "config-data") pod "119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6" (UID: "119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:56:21 crc kubenswrapper[5014]: I0228 04:56:21.823081 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6" (UID: "119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:56:21 crc kubenswrapper[5014]: I0228 04:56:21.848177 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6" (UID: "119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:56:21 crc kubenswrapper[5014]: I0228 04:56:21.869949 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:21 crc kubenswrapper[5014]: I0228 04:56:21.869994 5014 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:21 crc kubenswrapper[5014]: I0228 04:56:21.870004 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:21 crc kubenswrapper[5014]: I0228 04:56:21.870014 5014 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6-logs\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:21 crc kubenswrapper[5014]: I0228 04:56:21.870023 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4l58l\" (UniqueName: \"kubernetes.io/projected/119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6-kube-api-access-4l58l\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:22 crc kubenswrapper[5014]: I0228 04:56:22.326779 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 04:56:22 crc kubenswrapper[5014]: I0228 04:56:22.326774 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6","Type":"ContainerDied","Data":"fe0081b3872eee7c96ff9fc7e87a7764e71d89f691869e2286eae4a280eba09a"} Feb 28 04:56:22 crc kubenswrapper[5014]: I0228 04:56:22.327647 5014 scope.go:117] "RemoveContainer" containerID="5c8ee2339b57f90aaa1826f2d6799cb71d83820646554e51685ea9c951f7bddb" Feb 28 04:56:22 crc kubenswrapper[5014]: I0228 04:56:22.349982 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 04:56:22 crc kubenswrapper[5014]: I0228 04:56:22.357829 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 04:56:22 crc kubenswrapper[5014]: I0228 04:56:22.375959 5014 scope.go:117] "RemoveContainer" containerID="98fe048c4c0aba06e2bb403dcf4a5832780969c4662afdedee99c2393b86aa6a" Feb 28 04:56:22 crc kubenswrapper[5014]: I0228 04:56:22.392210 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 28 04:56:22 crc kubenswrapper[5014]: E0228 04:56:22.392733 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6" containerName="nova-metadata-log" Feb 28 04:56:22 crc kubenswrapper[5014]: I0228 04:56:22.392764 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6" containerName="nova-metadata-log" Feb 28 04:56:22 crc kubenswrapper[5014]: E0228 04:56:22.392789 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6" containerName="nova-metadata-metadata" Feb 28 04:56:22 crc kubenswrapper[5014]: I0228 04:56:22.392801 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6" containerName="nova-metadata-metadata" Feb 28 04:56:22 crc 
kubenswrapper[5014]: I0228 04:56:22.393253 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6" containerName="nova-metadata-log" Feb 28 04:56:22 crc kubenswrapper[5014]: I0228 04:56:22.393294 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6" containerName="nova-metadata-metadata" Feb 28 04:56:22 crc kubenswrapper[5014]: I0228 04:56:22.394728 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 04:56:22 crc kubenswrapper[5014]: I0228 04:56:22.398387 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 28 04:56:22 crc kubenswrapper[5014]: I0228 04:56:22.398531 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 28 04:56:22 crc kubenswrapper[5014]: I0228 04:56:22.421664 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 04:56:22 crc kubenswrapper[5014]: I0228 04:56:22.480376 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66tj9\" (UniqueName: \"kubernetes.io/projected/d354f3a0-5e09-438a-bb5d-385b2ab4857f-kube-api-access-66tj9\") pod \"nova-metadata-0\" (UID: \"d354f3a0-5e09-438a-bb5d-385b2ab4857f\") " pod="openstack/nova-metadata-0" Feb 28 04:56:22 crc kubenswrapper[5014]: I0228 04:56:22.480505 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d354f3a0-5e09-438a-bb5d-385b2ab4857f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d354f3a0-5e09-438a-bb5d-385b2ab4857f\") " pod="openstack/nova-metadata-0" Feb 28 04:56:22 crc kubenswrapper[5014]: I0228 04:56:22.480728 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d354f3a0-5e09-438a-bb5d-385b2ab4857f-logs\") pod \"nova-metadata-0\" (UID: \"d354f3a0-5e09-438a-bb5d-385b2ab4857f\") " pod="openstack/nova-metadata-0" Feb 28 04:56:22 crc kubenswrapper[5014]: I0228 04:56:22.480772 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d354f3a0-5e09-438a-bb5d-385b2ab4857f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d354f3a0-5e09-438a-bb5d-385b2ab4857f\") " pod="openstack/nova-metadata-0" Feb 28 04:56:22 crc kubenswrapper[5014]: I0228 04:56:22.481099 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d354f3a0-5e09-438a-bb5d-385b2ab4857f-config-data\") pod \"nova-metadata-0\" (UID: \"d354f3a0-5e09-438a-bb5d-385b2ab4857f\") " pod="openstack/nova-metadata-0" Feb 28 04:56:22 crc kubenswrapper[5014]: I0228 04:56:22.582417 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d354f3a0-5e09-438a-bb5d-385b2ab4857f-logs\") pod \"nova-metadata-0\" (UID: \"d354f3a0-5e09-438a-bb5d-385b2ab4857f\") " pod="openstack/nova-metadata-0" Feb 28 04:56:22 crc kubenswrapper[5014]: I0228 04:56:22.582473 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d354f3a0-5e09-438a-bb5d-385b2ab4857f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d354f3a0-5e09-438a-bb5d-385b2ab4857f\") " pod="openstack/nova-metadata-0" Feb 28 04:56:22 crc kubenswrapper[5014]: I0228 04:56:22.582527 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d354f3a0-5e09-438a-bb5d-385b2ab4857f-config-data\") pod \"nova-metadata-0\" (UID: 
\"d354f3a0-5e09-438a-bb5d-385b2ab4857f\") " pod="openstack/nova-metadata-0" Feb 28 04:56:22 crc kubenswrapper[5014]: I0228 04:56:22.582578 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66tj9\" (UniqueName: \"kubernetes.io/projected/d354f3a0-5e09-438a-bb5d-385b2ab4857f-kube-api-access-66tj9\") pod \"nova-metadata-0\" (UID: \"d354f3a0-5e09-438a-bb5d-385b2ab4857f\") " pod="openstack/nova-metadata-0" Feb 28 04:56:22 crc kubenswrapper[5014]: I0228 04:56:22.582636 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d354f3a0-5e09-438a-bb5d-385b2ab4857f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d354f3a0-5e09-438a-bb5d-385b2ab4857f\") " pod="openstack/nova-metadata-0" Feb 28 04:56:22 crc kubenswrapper[5014]: I0228 04:56:22.582942 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d354f3a0-5e09-438a-bb5d-385b2ab4857f-logs\") pod \"nova-metadata-0\" (UID: \"d354f3a0-5e09-438a-bb5d-385b2ab4857f\") " pod="openstack/nova-metadata-0" Feb 28 04:56:22 crc kubenswrapper[5014]: I0228 04:56:22.588472 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d354f3a0-5e09-438a-bb5d-385b2ab4857f-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d354f3a0-5e09-438a-bb5d-385b2ab4857f\") " pod="openstack/nova-metadata-0" Feb 28 04:56:22 crc kubenswrapper[5014]: I0228 04:56:22.588490 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d354f3a0-5e09-438a-bb5d-385b2ab4857f-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d354f3a0-5e09-438a-bb5d-385b2ab4857f\") " pod="openstack/nova-metadata-0" Feb 28 04:56:22 crc kubenswrapper[5014]: I0228 04:56:22.589682 5014 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d354f3a0-5e09-438a-bb5d-385b2ab4857f-config-data\") pod \"nova-metadata-0\" (UID: \"d354f3a0-5e09-438a-bb5d-385b2ab4857f\") " pod="openstack/nova-metadata-0" Feb 28 04:56:22 crc kubenswrapper[5014]: I0228 04:56:22.616500 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66tj9\" (UniqueName: \"kubernetes.io/projected/d354f3a0-5e09-438a-bb5d-385b2ab4857f-kube-api-access-66tj9\") pod \"nova-metadata-0\" (UID: \"d354f3a0-5e09-438a-bb5d-385b2ab4857f\") " pod="openstack/nova-metadata-0" Feb 28 04:56:22 crc kubenswrapper[5014]: I0228 04:56:22.719459 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 28 04:56:23 crc kubenswrapper[5014]: I0228 04:56:23.194588 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 28 04:56:23 crc kubenswrapper[5014]: W0228 04:56:23.200999 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd354f3a0_5e09_438a_bb5d_385b2ab4857f.slice/crio-fa4a744c42d9b206c5981ae43443e0442e9c6e9189d1f164daadae5470cabccb WatchSource:0}: Error finding container fa4a744c42d9b206c5981ae43443e0442e9c6e9189d1f164daadae5470cabccb: Status 404 returned error can't find the container with id fa4a744c42d9b206c5981ae43443e0442e9c6e9189d1f164daadae5470cabccb Feb 28 04:56:23 crc kubenswrapper[5014]: I0228 04:56:23.359433 5014 generic.go:334] "Generic (PLEG): container finished" podID="2b76306f-13bd-4df4-a8a2-d3a1eede7020" containerID="3336e496711e2ad799de320a9cb7fcdfc3046fdddf17c6e42eb0becbe91e7955" exitCode=0 Feb 28 04:56:23 crc kubenswrapper[5014]: I0228 04:56:23.359510 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"2b76306f-13bd-4df4-a8a2-d3a1eede7020","Type":"ContainerDied","Data":"3336e496711e2ad799de320a9cb7fcdfc3046fdddf17c6e42eb0becbe91e7955"} Feb 28 04:56:23 crc kubenswrapper[5014]: I0228 04:56:23.359538 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"2b76306f-13bd-4df4-a8a2-d3a1eede7020","Type":"ContainerDied","Data":"928becd321eb577895bfc83ffd3decab53001d6b9cb8e5ae4fcadf6d9d690300"} Feb 28 04:56:23 crc kubenswrapper[5014]: I0228 04:56:23.359563 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="928becd321eb577895bfc83ffd3decab53001d6b9cb8e5ae4fcadf6d9d690300" Feb 28 04:56:23 crc kubenswrapper[5014]: I0228 04:56:23.363219 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d354f3a0-5e09-438a-bb5d-385b2ab4857f","Type":"ContainerStarted","Data":"fa4a744c42d9b206c5981ae43443e0442e9c6e9189d1f164daadae5470cabccb"} Feb 28 04:56:23 crc kubenswrapper[5014]: I0228 04:56:23.380735 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 28 04:56:23 crc kubenswrapper[5014]: I0228 04:56:23.516776 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b76306f-13bd-4df4-a8a2-d3a1eede7020-config-data\") pod \"2b76306f-13bd-4df4-a8a2-d3a1eede7020\" (UID: \"2b76306f-13bd-4df4-a8a2-d3a1eede7020\") " Feb 28 04:56:23 crc kubenswrapper[5014]: I0228 04:56:23.516883 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b76306f-13bd-4df4-a8a2-d3a1eede7020-combined-ca-bundle\") pod \"2b76306f-13bd-4df4-a8a2-d3a1eede7020\" (UID: \"2b76306f-13bd-4df4-a8a2-d3a1eede7020\") " Feb 28 04:56:23 crc kubenswrapper[5014]: I0228 04:56:23.517099 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-274cn\" (UniqueName: \"kubernetes.io/projected/2b76306f-13bd-4df4-a8a2-d3a1eede7020-kube-api-access-274cn\") pod \"2b76306f-13bd-4df4-a8a2-d3a1eede7020\" (UID: \"2b76306f-13bd-4df4-a8a2-d3a1eede7020\") " Feb 28 04:56:23 crc kubenswrapper[5014]: I0228 04:56:23.520426 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b76306f-13bd-4df4-a8a2-d3a1eede7020-kube-api-access-274cn" (OuterVolumeSpecName: "kube-api-access-274cn") pod "2b76306f-13bd-4df4-a8a2-d3a1eede7020" (UID: "2b76306f-13bd-4df4-a8a2-d3a1eede7020"). InnerVolumeSpecName "kube-api-access-274cn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:56:23 crc kubenswrapper[5014]: I0228 04:56:23.557618 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b76306f-13bd-4df4-a8a2-d3a1eede7020-config-data" (OuterVolumeSpecName: "config-data") pod "2b76306f-13bd-4df4-a8a2-d3a1eede7020" (UID: "2b76306f-13bd-4df4-a8a2-d3a1eede7020"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:56:23 crc kubenswrapper[5014]: I0228 04:56:23.564623 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b76306f-13bd-4df4-a8a2-d3a1eede7020-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2b76306f-13bd-4df4-a8a2-d3a1eede7020" (UID: "2b76306f-13bd-4df4-a8a2-d3a1eede7020"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:56:23 crc kubenswrapper[5014]: I0228 04:56:23.619195 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-274cn\" (UniqueName: \"kubernetes.io/projected/2b76306f-13bd-4df4-a8a2-d3a1eede7020-kube-api-access-274cn\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:23 crc kubenswrapper[5014]: I0228 04:56:23.619234 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2b76306f-13bd-4df4-a8a2-d3a1eede7020-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:23 crc kubenswrapper[5014]: I0228 04:56:23.619249 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b76306f-13bd-4df4-a8a2-d3a1eede7020-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:56:24 crc kubenswrapper[5014]: I0228 04:56:24.184661 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6" path="/var/lib/kubelet/pods/119d5cde-7b4b-4f99-b1c1-0bef4e8cc3a6/volumes" Feb 28 04:56:24 crc kubenswrapper[5014]: I0228 04:56:24.379005 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 28 04:56:24 crc kubenswrapper[5014]: I0228 04:56:24.379012 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d354f3a0-5e09-438a-bb5d-385b2ab4857f","Type":"ContainerStarted","Data":"078d19b3e2e1b372395738df54902a4bf49452d43d99f0c3eecdcb7a22332d6e"} Feb 28 04:56:24 crc kubenswrapper[5014]: I0228 04:56:24.379102 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d354f3a0-5e09-438a-bb5d-385b2ab4857f","Type":"ContainerStarted","Data":"18fb634ee0e54c59689e5ad0b858455c89b5c516c05918d80ea7200238e35f9b"} Feb 28 04:56:24 crc kubenswrapper[5014]: I0228 04:56:24.419165 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.419134469 podStartE2EDuration="2.419134469s" podCreationTimestamp="2026-02-28 04:56:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:56:24.402662034 +0000 UTC m=+1373.072787934" watchObservedRunningTime="2026-02-28 04:56:24.419134469 +0000 UTC m=+1373.089260419" Feb 28 04:56:24 crc kubenswrapper[5014]: I0228 04:56:24.442368 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 04:56:24 crc kubenswrapper[5014]: I0228 04:56:24.456890 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 04:56:24 crc kubenswrapper[5014]: I0228 04:56:24.479866 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 04:56:24 crc kubenswrapper[5014]: E0228 04:56:24.480664 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b76306f-13bd-4df4-a8a2-d3a1eede7020" containerName="nova-scheduler-scheduler" Feb 28 04:56:24 crc kubenswrapper[5014]: I0228 04:56:24.480691 5014 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2b76306f-13bd-4df4-a8a2-d3a1eede7020" containerName="nova-scheduler-scheduler" Feb 28 04:56:24 crc kubenswrapper[5014]: I0228 04:56:24.480966 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b76306f-13bd-4df4-a8a2-d3a1eede7020" containerName="nova-scheduler-scheduler" Feb 28 04:56:24 crc kubenswrapper[5014]: I0228 04:56:24.481781 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 28 04:56:24 crc kubenswrapper[5014]: I0228 04:56:24.483798 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 28 04:56:24 crc kubenswrapper[5014]: I0228 04:56:24.495154 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 04:56:24 crc kubenswrapper[5014]: I0228 04:56:24.639720 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b66aa07-e591-474f-b1f0-442147425299-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7b66aa07-e591-474f-b1f0-442147425299\") " pod="openstack/nova-scheduler-0" Feb 28 04:56:24 crc kubenswrapper[5014]: I0228 04:56:24.639928 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b66aa07-e591-474f-b1f0-442147425299-config-data\") pod \"nova-scheduler-0\" (UID: \"7b66aa07-e591-474f-b1f0-442147425299\") " pod="openstack/nova-scheduler-0" Feb 28 04:56:24 crc kubenswrapper[5014]: I0228 04:56:24.640064 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9qbt\" (UniqueName: \"kubernetes.io/projected/7b66aa07-e591-474f-b1f0-442147425299-kube-api-access-v9qbt\") pod \"nova-scheduler-0\" (UID: \"7b66aa07-e591-474f-b1f0-442147425299\") " pod="openstack/nova-scheduler-0" Feb 28 04:56:24 crc kubenswrapper[5014]: 
I0228 04:56:24.741446 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9qbt\" (UniqueName: \"kubernetes.io/projected/7b66aa07-e591-474f-b1f0-442147425299-kube-api-access-v9qbt\") pod \"nova-scheduler-0\" (UID: \"7b66aa07-e591-474f-b1f0-442147425299\") " pod="openstack/nova-scheduler-0" Feb 28 04:56:24 crc kubenswrapper[5014]: I0228 04:56:24.741532 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b66aa07-e591-474f-b1f0-442147425299-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7b66aa07-e591-474f-b1f0-442147425299\") " pod="openstack/nova-scheduler-0" Feb 28 04:56:24 crc kubenswrapper[5014]: I0228 04:56:24.741609 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b66aa07-e591-474f-b1f0-442147425299-config-data\") pod \"nova-scheduler-0\" (UID: \"7b66aa07-e591-474f-b1f0-442147425299\") " pod="openstack/nova-scheduler-0" Feb 28 04:56:24 crc kubenswrapper[5014]: I0228 04:56:24.752537 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b66aa07-e591-474f-b1f0-442147425299-config-data\") pod \"nova-scheduler-0\" (UID: \"7b66aa07-e591-474f-b1f0-442147425299\") " pod="openstack/nova-scheduler-0" Feb 28 04:56:24 crc kubenswrapper[5014]: I0228 04:56:24.752620 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b66aa07-e591-474f-b1f0-442147425299-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"7b66aa07-e591-474f-b1f0-442147425299\") " pod="openstack/nova-scheduler-0" Feb 28 04:56:24 crc kubenswrapper[5014]: I0228 04:56:24.771178 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9qbt\" (UniqueName: 
\"kubernetes.io/projected/7b66aa07-e591-474f-b1f0-442147425299-kube-api-access-v9qbt\") pod \"nova-scheduler-0\" (UID: \"7b66aa07-e591-474f-b1f0-442147425299\") " pod="openstack/nova-scheduler-0" Feb 28 04:56:24 crc kubenswrapper[5014]: I0228 04:56:24.801354 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 28 04:56:25 crc kubenswrapper[5014]: I0228 04:56:25.360170 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 28 04:56:25 crc kubenswrapper[5014]: I0228 04:56:25.395778 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7b66aa07-e591-474f-b1f0-442147425299","Type":"ContainerStarted","Data":"11c5393d2b4bc67de4bf0ee432d08f038fa20046ced660e1ab3ac53f7d7ed3b1"} Feb 28 04:56:26 crc kubenswrapper[5014]: I0228 04:56:26.186282 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b76306f-13bd-4df4-a8a2-d3a1eede7020" path="/var/lib/kubelet/pods/2b76306f-13bd-4df4-a8a2-d3a1eede7020/volumes" Feb 28 04:56:26 crc kubenswrapper[5014]: I0228 04:56:26.420504 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"7b66aa07-e591-474f-b1f0-442147425299","Type":"ContainerStarted","Data":"e5a8c4042e7ea5ee81d4312c4ce91b0b439e602cf2393bae56f1b322915bff2d"} Feb 28 04:56:26 crc kubenswrapper[5014]: I0228 04:56:26.457479 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.457458831 podStartE2EDuration="2.457458831s" podCreationTimestamp="2026-02-28 04:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:56:26.447754108 +0000 UTC m=+1375.117880028" watchObservedRunningTime="2026-02-28 04:56:26.457458831 +0000 UTC m=+1375.127584751" Feb 28 04:56:27 crc kubenswrapper[5014]: I0228 04:56:27.720241 
5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 28 04:56:27 crc kubenswrapper[5014]: I0228 04:56:27.720778 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 28 04:56:29 crc kubenswrapper[5014]: I0228 04:56:29.733378 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 28 04:56:29 crc kubenswrapper[5014]: I0228 04:56:29.733694 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 28 04:56:29 crc kubenswrapper[5014]: I0228 04:56:29.801838 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 28 04:56:30 crc kubenswrapper[5014]: I0228 04:56:30.746125 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d4e1baa8-fe04-453a-8462-e7de1e98ba73" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 28 04:56:30 crc kubenswrapper[5014]: I0228 04:56:30.746144 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d4e1baa8-fe04-453a-8462-e7de1e98ba73" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.207:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 28 04:56:32 crc kubenswrapper[5014]: I0228 04:56:32.719596 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 28 04:56:32 crc kubenswrapper[5014]: I0228 04:56:32.720089 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 28 04:56:33 crc kubenswrapper[5014]: I0228 04:56:33.733134 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" 
podUID="d354f3a0-5e09-438a-bb5d-385b2ab4857f" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 28 04:56:33 crc kubenswrapper[5014]: I0228 04:56:33.733153 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d354f3a0-5e09-438a-bb5d-385b2ab4857f" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.208:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 28 04:56:34 crc kubenswrapper[5014]: I0228 04:56:34.802178 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 28 04:56:34 crc kubenswrapper[5014]: I0228 04:56:34.839953 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 28 04:56:35 crc kubenswrapper[5014]: I0228 04:56:35.563724 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 28 04:56:37 crc kubenswrapper[5014]: I0228 04:56:37.563718 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 28 04:56:39 crc kubenswrapper[5014]: I0228 04:56:39.743754 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 28 04:56:39 crc kubenswrapper[5014]: I0228 04:56:39.746117 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 28 04:56:39 crc kubenswrapper[5014]: I0228 04:56:39.746652 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 28 04:56:39 crc kubenswrapper[5014]: I0228 04:56:39.754323 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 28 04:56:40 crc kubenswrapper[5014]: I0228 04:56:40.574768 5014 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 28 04:56:40 crc kubenswrapper[5014]: I0228 04:56:40.582167 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 28 04:56:42 crc kubenswrapper[5014]: I0228 04:56:42.728255 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 28 04:56:42 crc kubenswrapper[5014]: I0228 04:56:42.735106 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 28 04:56:42 crc kubenswrapper[5014]: I0228 04:56:42.741181 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 28 04:56:43 crc kubenswrapper[5014]: I0228 04:56:43.606550 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 28 04:56:45 crc kubenswrapper[5014]: I0228 04:56:45.706620 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:56:45 crc kubenswrapper[5014]: I0228 04:56:45.707027 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:56:51 crc kubenswrapper[5014]: I0228 04:56:51.152477 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 28 04:56:52 crc kubenswrapper[5014]: I0228 04:56:52.080480 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 28 
04:56:53 crc kubenswrapper[5014]: I0228 04:56:53.906748 5014 scope.go:117] "RemoveContainer" containerID="36e86e4f808ab2a90ca07bb71d852d074d15aad41b7d840a859b88051549d83b"
Feb 28 04:56:55 crc kubenswrapper[5014]: I0228 04:56:55.240112 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="351fb773-0669-41c0-aee8-0469f34d64c9" containerName="rabbitmq" containerID="cri-o://29a94a8a21103b36ec5a9c08e355416cad5772f0c62b047c91ce146979b30c28" gracePeriod=604796
Feb 28 04:56:56 crc kubenswrapper[5014]: I0228 04:56:56.126910 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="46e75f06-c8df-44b8-a6e4-8f663e8b0a1a" containerName="rabbitmq" containerID="cri-o://ef252e73b4755ebb79ca0372edd6145d6575e4965f3ab8414b00083c7d04ef30" gracePeriod=604796
Feb 28 04:56:57 crc kubenswrapper[5014]: I0228 04:56:57.631911 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="46e75f06-c8df-44b8-a6e4-8f663e8b0a1a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused"
Feb 28 04:56:57 crc kubenswrapper[5014]: I0228 04:56:57.948440 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="351fb773-0669-41c0-aee8-0469f34d64c9" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused"
Feb 28 04:57:01 crc kubenswrapper[5014]: I0228 04:57:01.818691 5014 generic.go:334] "Generic (PLEG): container finished" podID="351fb773-0669-41c0-aee8-0469f34d64c9" containerID="29a94a8a21103b36ec5a9c08e355416cad5772f0c62b047c91ce146979b30c28" exitCode=0
Feb 28 04:57:01 crc kubenswrapper[5014]: I0228 04:57:01.818791 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"351fb773-0669-41c0-aee8-0469f34d64c9","Type":"ContainerDied","Data":"29a94a8a21103b36ec5a9c08e355416cad5772f0c62b047c91ce146979b30c28"}
Feb 28 04:57:01 crc kubenswrapper[5014]: I0228 04:57:01.819475 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"351fb773-0669-41c0-aee8-0469f34d64c9","Type":"ContainerDied","Data":"e6db41dc1fdc6643734cd7b0c3b20b5e954611ce2b368a0eef3a854f905053ab"}
Feb 28 04:57:01 crc kubenswrapper[5014]: I0228 04:57:01.819511 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6db41dc1fdc6643734cd7b0c3b20b5e954611ce2b368a0eef3a854f905053ab"
Feb 28 04:57:01 crc kubenswrapper[5014]: I0228 04:57:01.846840 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 28 04:57:01 crc kubenswrapper[5014]: I0228 04:57:01.997581 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/351fb773-0669-41c0-aee8-0469f34d64c9-config-data\") pod \"351fb773-0669-41c0-aee8-0469f34d64c9\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") "
Feb 28 04:57:01 crc kubenswrapper[5014]: I0228 04:57:01.997644 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/351fb773-0669-41c0-aee8-0469f34d64c9-erlang-cookie-secret\") pod \"351fb773-0669-41c0-aee8-0469f34d64c9\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") "
Feb 28 04:57:01 crc kubenswrapper[5014]: I0228 04:57:01.997732 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"351fb773-0669-41c0-aee8-0469f34d64c9\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") "
Feb 28 04:57:01 crc kubenswrapper[5014]: I0228 04:57:01.997756 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/351fb773-0669-41c0-aee8-0469f34d64c9-rabbitmq-confd\") pod \"351fb773-0669-41c0-aee8-0469f34d64c9\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") "
Feb 28 04:57:01 crc kubenswrapper[5014]: I0228 04:57:01.997782 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/351fb773-0669-41c0-aee8-0469f34d64c9-rabbitmq-tls\") pod \"351fb773-0669-41c0-aee8-0469f34d64c9\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") "
Feb 28 04:57:01 crc kubenswrapper[5014]: I0228 04:57:01.997850 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/351fb773-0669-41c0-aee8-0469f34d64c9-rabbitmq-plugins\") pod \"351fb773-0669-41c0-aee8-0469f34d64c9\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") "
Feb 28 04:57:01 crc kubenswrapper[5014]: I0228 04:57:01.997923 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/351fb773-0669-41c0-aee8-0469f34d64c9-pod-info\") pod \"351fb773-0669-41c0-aee8-0469f34d64c9\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") "
Feb 28 04:57:01 crc kubenswrapper[5014]: I0228 04:57:01.997974 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/351fb773-0669-41c0-aee8-0469f34d64c9-plugins-conf\") pod \"351fb773-0669-41c0-aee8-0469f34d64c9\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") "
Feb 28 04:57:01 crc kubenswrapper[5014]: I0228 04:57:01.998048 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwbcj\" (UniqueName: \"kubernetes.io/projected/351fb773-0669-41c0-aee8-0469f34d64c9-kube-api-access-fwbcj\") pod \"351fb773-0669-41c0-aee8-0469f34d64c9\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") "
Feb 28 04:57:01 crc kubenswrapper[5014]: I0228 04:57:01.998113 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/351fb773-0669-41c0-aee8-0469f34d64c9-rabbitmq-erlang-cookie\") pod \"351fb773-0669-41c0-aee8-0469f34d64c9\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") "
Feb 28 04:57:01 crc kubenswrapper[5014]: I0228 04:57:01.998149 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/351fb773-0669-41c0-aee8-0469f34d64c9-server-conf\") pod \"351fb773-0669-41c0-aee8-0469f34d64c9\" (UID: \"351fb773-0669-41c0-aee8-0469f34d64c9\") "
Feb 28 04:57:01 crc kubenswrapper[5014]: I0228 04:57:01.998686 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/351fb773-0669-41c0-aee8-0469f34d64c9-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "351fb773-0669-41c0-aee8-0469f34d64c9" (UID: "351fb773-0669-41c0-aee8-0469f34d64c9"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 28 04:57:01 crc kubenswrapper[5014]: I0228 04:57:01.999277 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/351fb773-0669-41c0-aee8-0469f34d64c9-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "351fb773-0669-41c0-aee8-0469f34d64c9" (UID: "351fb773-0669-41c0-aee8-0469f34d64c9"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.000293 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/351fb773-0669-41c0-aee8-0469f34d64c9-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "351fb773-0669-41c0-aee8-0469f34d64c9" (UID: "351fb773-0669-41c0-aee8-0469f34d64c9"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.003040 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/351fb773-0669-41c0-aee8-0469f34d64c9-pod-info" (OuterVolumeSpecName: "pod-info") pod "351fb773-0669-41c0-aee8-0469f34d64c9" (UID: "351fb773-0669-41c0-aee8-0469f34d64c9"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.003277 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/351fb773-0669-41c0-aee8-0469f34d64c9-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "351fb773-0669-41c0-aee8-0469f34d64c9" (UID: "351fb773-0669-41c0-aee8-0469f34d64c9"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.004475 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/351fb773-0669-41c0-aee8-0469f34d64c9-kube-api-access-fwbcj" (OuterVolumeSpecName: "kube-api-access-fwbcj") pod "351fb773-0669-41c0-aee8-0469f34d64c9" (UID: "351fb773-0669-41c0-aee8-0469f34d64c9"). InnerVolumeSpecName "kube-api-access-fwbcj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.039362 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/351fb773-0669-41c0-aee8-0469f34d64c9-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "351fb773-0669-41c0-aee8-0469f34d64c9" (UID: "351fb773-0669-41c0-aee8-0469f34d64c9"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.041439 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/351fb773-0669-41c0-aee8-0469f34d64c9-config-data" (OuterVolumeSpecName: "config-data") pod "351fb773-0669-41c0-aee8-0469f34d64c9" (UID: "351fb773-0669-41c0-aee8-0469f34d64c9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.052400 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "351fb773-0669-41c0-aee8-0469f34d64c9" (UID: "351fb773-0669-41c0-aee8-0469f34d64c9"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.072475 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/351fb773-0669-41c0-aee8-0469f34d64c9-server-conf" (OuterVolumeSpecName: "server-conf") pod "351fb773-0669-41c0-aee8-0469f34d64c9" (UID: "351fb773-0669-41c0-aee8-0469f34d64c9"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.102552 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwbcj\" (UniqueName: \"kubernetes.io/projected/351fb773-0669-41c0-aee8-0469f34d64c9-kube-api-access-fwbcj\") on node \"crc\" DevicePath \"\""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.102590 5014 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/351fb773-0669-41c0-aee8-0469f34d64c9-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.102604 5014 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/351fb773-0669-41c0-aee8-0469f34d64c9-server-conf\") on node \"crc\" DevicePath \"\""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.102615 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/351fb773-0669-41c0-aee8-0469f34d64c9-config-data\") on node \"crc\" DevicePath \"\""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.102626 5014 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/351fb773-0669-41c0-aee8-0469f34d64c9-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.102668 5014 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" "
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.102680 5014 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/351fb773-0669-41c0-aee8-0469f34d64c9-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.102692 5014 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/351fb773-0669-41c0-aee8-0469f34d64c9-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.102703 5014 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/351fb773-0669-41c0-aee8-0469f34d64c9-pod-info\") on node \"crc\" DevicePath \"\""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.112966 5014 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/351fb773-0669-41c0-aee8-0469f34d64c9-plugins-conf\") on node \"crc\" DevicePath \"\""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.132846 5014 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc"
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.154893 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/351fb773-0669-41c0-aee8-0469f34d64c9-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "351fb773-0669-41c0-aee8-0469f34d64c9" (UID: "351fb773-0669-41c0-aee8-0469f34d64c9"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.214567 5014 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.214612 5014 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/351fb773-0669-41c0-aee8-0469f34d64c9-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.686917 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.829442 5014 generic.go:334] "Generic (PLEG): container finished" podID="46e75f06-c8df-44b8-a6e4-8f663e8b0a1a" containerID="ef252e73b4755ebb79ca0372edd6145d6575e4965f3ab8414b00083c7d04ef30" exitCode=0
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.829540 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.829587 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-erlang-cookie-secret\") pod \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") "
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.829601 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a","Type":"ContainerDied","Data":"ef252e73b4755ebb79ca0372edd6145d6575e4965f3ab8414b00083c7d04ef30"}
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.829626 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") "
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.829766 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-rabbitmq-confd\") pod \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") "
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.829791 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g85ss\" (UniqueName: \"kubernetes.io/projected/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-kube-api-access-g85ss\") pod \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") "
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.829892 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-rabbitmq-tls\") pod \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") "
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.829949 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-rabbitmq-plugins\") pod \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") "
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.829985 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-pod-info\") pod \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") "
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.830003 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-config-data\") pod \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") "
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.830025 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-rabbitmq-erlang-cookie\") pod \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") "
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.830045 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-plugins-conf\") pod \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") "
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.830087 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-server-conf\") pod \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\" (UID: \"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a\") "
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.830431 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"46e75f06-c8df-44b8-a6e4-8f663e8b0a1a","Type":"ContainerDied","Data":"7501ac43739d89a52c67369e86cd763ac003cab29148cd582786f315d5f67f7d"}
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.830444 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "46e75f06-c8df-44b8-a6e4-8f663e8b0a1a" (UID: "46e75f06-c8df-44b8-a6e4-8f663e8b0a1a"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.830461 5014 scope.go:117] "RemoveContainer" containerID="ef252e73b4755ebb79ca0372edd6145d6575e4965f3ab8414b00083c7d04ef30"
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.830515 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.831069 5014 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.831068 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "46e75f06-c8df-44b8-a6e4-8f663e8b0a1a" (UID: "46e75f06-c8df-44b8-a6e4-8f663e8b0a1a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.831528 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "46e75f06-c8df-44b8-a6e4-8f663e8b0a1a" (UID: "46e75f06-c8df-44b8-a6e4-8f663e8b0a1a"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.833774 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-kube-api-access-g85ss" (OuterVolumeSpecName: "kube-api-access-g85ss") pod "46e75f06-c8df-44b8-a6e4-8f663e8b0a1a" (UID: "46e75f06-c8df-44b8-a6e4-8f663e8b0a1a"). InnerVolumeSpecName "kube-api-access-g85ss". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.834930 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "46e75f06-c8df-44b8-a6e4-8f663e8b0a1a" (UID: "46e75f06-c8df-44b8-a6e4-8f663e8b0a1a"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.850146 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "46e75f06-c8df-44b8-a6e4-8f663e8b0a1a" (UID: "46e75f06-c8df-44b8-a6e4-8f663e8b0a1a"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.850371 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "46e75f06-c8df-44b8-a6e4-8f663e8b0a1a" (UID: "46e75f06-c8df-44b8-a6e4-8f663e8b0a1a"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.856106 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-pod-info" (OuterVolumeSpecName: "pod-info") pod "46e75f06-c8df-44b8-a6e4-8f663e8b0a1a" (UID: "46e75f06-c8df-44b8-a6e4-8f663e8b0a1a"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.859680 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-config-data" (OuterVolumeSpecName: "config-data") pod "46e75f06-c8df-44b8-a6e4-8f663e8b0a1a" (UID: "46e75f06-c8df-44b8-a6e4-8f663e8b0a1a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.886307 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-server-conf" (OuterVolumeSpecName: "server-conf") pod "46e75f06-c8df-44b8-a6e4-8f663e8b0a1a" (UID: "46e75f06-c8df-44b8-a6e4-8f663e8b0a1a"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.932384 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g85ss\" (UniqueName: \"kubernetes.io/projected/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-kube-api-access-g85ss\") on node \"crc\" DevicePath \"\""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.932566 5014 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.932943 5014 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-pod-info\") on node \"crc\" DevicePath \"\""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.933034 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-config-data\") on node \"crc\" DevicePath \"\""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.933147 5014 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.933230 5014 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-plugins-conf\") on node \"crc\" DevicePath \"\""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.933288 5014 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-server-conf\") on node \"crc\" DevicePath \"\""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.933372 5014 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.933474 5014 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" "
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.937321 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "46e75f06-c8df-44b8-a6e4-8f663e8b0a1a" (UID: "46e75f06-c8df-44b8-a6e4-8f663e8b0a1a"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.958662 5014 scope.go:117] "RemoveContainer" containerID="0d1061f7a0ea20558bdded3c641a52419a84163e5db3bf1d2a4fd9e2cd9544e7"
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.958770 5014 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc"
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.962434 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.972497 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.980342 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 28 04:57:02 crc kubenswrapper[5014]: E0228 04:57:02.980887 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46e75f06-c8df-44b8-a6e4-8f663e8b0a1a" containerName="setup-container"
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.980981 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="46e75f06-c8df-44b8-a6e4-8f663e8b0a1a" containerName="setup-container"
Feb 28 04:57:02 crc kubenswrapper[5014]: E0228 04:57:02.981055 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="351fb773-0669-41c0-aee8-0469f34d64c9" containerName="setup-container"
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.981130 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="351fb773-0669-41c0-aee8-0469f34d64c9" containerName="setup-container"
Feb 28 04:57:02 crc kubenswrapper[5014]: E0228 04:57:02.981185 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46e75f06-c8df-44b8-a6e4-8f663e8b0a1a" containerName="rabbitmq"
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.981241 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="46e75f06-c8df-44b8-a6e4-8f663e8b0a1a" containerName="rabbitmq"
Feb 28 04:57:02 crc kubenswrapper[5014]: E0228 04:57:02.981301 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="351fb773-0669-41c0-aee8-0469f34d64c9" containerName="rabbitmq"
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.981398 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="351fb773-0669-41c0-aee8-0469f34d64c9" containerName="rabbitmq"
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.981629 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="46e75f06-c8df-44b8-a6e4-8f663e8b0a1a" containerName="rabbitmq"
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.981699 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="351fb773-0669-41c0-aee8-0469f34d64c9" containerName="rabbitmq"
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.982889 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.984918 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.985708 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.985895 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.986124 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.986241 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-tcg9x"
Feb 28 04:57:02 crc kubenswrapper[5014]: I0228 04:57:02.986272 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.003630 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.030033 5014 scope.go:117] "RemoveContainer" containerID="ef252e73b4755ebb79ca0372edd6145d6575e4965f3ab8414b00083c7d04ef30"
Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.030307 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Feb 28 04:57:03 crc kubenswrapper[5014]: E0228 04:57:03.033460 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef252e73b4755ebb79ca0372edd6145d6575e4965f3ab8414b00083c7d04ef30\": container with ID starting with ef252e73b4755ebb79ca0372edd6145d6575e4965f3ab8414b00083c7d04ef30 not found: ID does not exist" containerID="ef252e73b4755ebb79ca0372edd6145d6575e4965f3ab8414b00083c7d04ef30"
Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.033505 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef252e73b4755ebb79ca0372edd6145d6575e4965f3ab8414b00083c7d04ef30"} err="failed to get container status \"ef252e73b4755ebb79ca0372edd6145d6575e4965f3ab8414b00083c7d04ef30\": rpc error: code = NotFound desc = could not find container \"ef252e73b4755ebb79ca0372edd6145d6575e4965f3ab8414b00083c7d04ef30\": container with ID starting with ef252e73b4755ebb79ca0372edd6145d6575e4965f3ab8414b00083c7d04ef30 not found: ID does not exist"
Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.033546 5014 scope.go:117] "RemoveContainer" containerID="0d1061f7a0ea20558bdded3c641a52419a84163e5db3bf1d2a4fd9e2cd9544e7"
Feb 28 04:57:03 crc kubenswrapper[5014]: E0228 04:57:03.033861 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d1061f7a0ea20558bdded3c641a52419a84163e5db3bf1d2a4fd9e2cd9544e7\": container with ID starting with 0d1061f7a0ea20558bdded3c641a52419a84163e5db3bf1d2a4fd9e2cd9544e7 not found: ID does not exist" containerID="0d1061f7a0ea20558bdded3c641a52419a84163e5db3bf1d2a4fd9e2cd9544e7"
Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.033893 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d1061f7a0ea20558bdded3c641a52419a84163e5db3bf1d2a4fd9e2cd9544e7"} err="failed to get container status \"0d1061f7a0ea20558bdded3c641a52419a84163e5db3bf1d2a4fd9e2cd9544e7\": rpc error: code = NotFound desc = could not find container \"0d1061f7a0ea20558bdded3c641a52419a84163e5db3bf1d2a4fd9e2cd9544e7\": container with ID starting with 0d1061f7a0ea20558bdded3c641a52419a84163e5db3bf1d2a4fd9e2cd9544e7 not found: ID does not exist"
Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.034842 5014 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\""
Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.034871 5014 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.136225 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7b0d0bd3-ff23-4098-93fb-debf7681cfce-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7b0d0bd3-ff23-4098-93fb-debf7681cfce\") " pod="openstack/rabbitmq-server-0"
Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.136394 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7b0d0bd3-ff23-4098-93fb-debf7681cfce-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7b0d0bd3-ff23-4098-93fb-debf7681cfce\") " pod="openstack/rabbitmq-server-0"
Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.136481 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q4zt\" (UniqueName: \"kubernetes.io/projected/7b0d0bd3-ff23-4098-93fb-debf7681cfce-kube-api-access-2q4zt\") pod \"rabbitmq-server-0\" (UID: \"7b0d0bd3-ff23-4098-93fb-debf7681cfce\") " pod="openstack/rabbitmq-server-0"
Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.136565 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b0d0bd3-ff23-4098-93fb-debf7681cfce-config-data\") pod \"rabbitmq-server-0\" (UID: \"7b0d0bd3-ff23-4098-93fb-debf7681cfce\") " pod="openstack/rabbitmq-server-0"
Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.136647 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7b0d0bd3-ff23-4098-93fb-debf7681cfce-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7b0d0bd3-ff23-4098-93fb-debf7681cfce\") " pod="openstack/rabbitmq-server-0"
Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.136697 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7b0d0bd3-ff23-4098-93fb-debf7681cfce-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7b0d0bd3-ff23-4098-93fb-debf7681cfce\") " pod="openstack/rabbitmq-server-0"
Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.136715 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7b0d0bd3-ff23-4098-93fb-debf7681cfce-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7b0d0bd3-ff23-4098-93fb-debf7681cfce\") " pod="openstack/rabbitmq-server-0"
Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.136822 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7b0d0bd3-ff23-4098-93fb-debf7681cfce-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7b0d0bd3-ff23-4098-93fb-debf7681cfce\") " pod="openstack/rabbitmq-server-0"
Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.136844 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7b0d0bd3-ff23-4098-93fb-debf7681cfce-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7b0d0bd3-ff23-4098-93fb-debf7681cfce\") " pod="openstack/rabbitmq-server-0"
Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.136972 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"7b0d0bd3-ff23-4098-93fb-debf7681cfce\") " pod="openstack/rabbitmq-server-0"
Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.137043 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7b0d0bd3-ff23-4098-93fb-debf7681cfce-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7b0d0bd3-ff23-4098-93fb-debf7681cfce\") " pod="openstack/rabbitmq-server-0"
Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.164559 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.173970 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.192109 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.193864 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.195797 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.195912 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.196042 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.196500 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.196966 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-679gc"
Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.197262 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.199639 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.213835 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.238781 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q4zt\" (UniqueName: 
\"kubernetes.io/projected/7b0d0bd3-ff23-4098-93fb-debf7681cfce-kube-api-access-2q4zt\") pod \"rabbitmq-server-0\" (UID: \"7b0d0bd3-ff23-4098-93fb-debf7681cfce\") " pod="openstack/rabbitmq-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.238867 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b0d0bd3-ff23-4098-93fb-debf7681cfce-config-data\") pod \"rabbitmq-server-0\" (UID: \"7b0d0bd3-ff23-4098-93fb-debf7681cfce\") " pod="openstack/rabbitmq-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.238900 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7b0d0bd3-ff23-4098-93fb-debf7681cfce-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7b0d0bd3-ff23-4098-93fb-debf7681cfce\") " pod="openstack/rabbitmq-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.238923 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7b0d0bd3-ff23-4098-93fb-debf7681cfce-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7b0d0bd3-ff23-4098-93fb-debf7681cfce\") " pod="openstack/rabbitmq-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.238939 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7b0d0bd3-ff23-4098-93fb-debf7681cfce-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7b0d0bd3-ff23-4098-93fb-debf7681cfce\") " pod="openstack/rabbitmq-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.238970 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7b0d0bd3-ff23-4098-93fb-debf7681cfce-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"7b0d0bd3-ff23-4098-93fb-debf7681cfce\") " 
pod="openstack/rabbitmq-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.238986 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7b0d0bd3-ff23-4098-93fb-debf7681cfce-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7b0d0bd3-ff23-4098-93fb-debf7681cfce\") " pod="openstack/rabbitmq-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.239033 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"7b0d0bd3-ff23-4098-93fb-debf7681cfce\") " pod="openstack/rabbitmq-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.239060 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7b0d0bd3-ff23-4098-93fb-debf7681cfce-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7b0d0bd3-ff23-4098-93fb-debf7681cfce\") " pod="openstack/rabbitmq-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.239094 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7b0d0bd3-ff23-4098-93fb-debf7681cfce-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7b0d0bd3-ff23-4098-93fb-debf7681cfce\") " pod="openstack/rabbitmq-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.239125 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7b0d0bd3-ff23-4098-93fb-debf7681cfce-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7b0d0bd3-ff23-4098-93fb-debf7681cfce\") " pod="openstack/rabbitmq-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.239618 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7b0d0bd3-ff23-4098-93fb-debf7681cfce-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"7b0d0bd3-ff23-4098-93fb-debf7681cfce\") " pod="openstack/rabbitmq-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.240689 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7b0d0bd3-ff23-4098-93fb-debf7681cfce-config-data\") pod \"rabbitmq-server-0\" (UID: \"7b0d0bd3-ff23-4098-93fb-debf7681cfce\") " pod="openstack/rabbitmq-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.240788 5014 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"7b0d0bd3-ff23-4098-93fb-debf7681cfce\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.240998 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7b0d0bd3-ff23-4098-93fb-debf7681cfce-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"7b0d0bd3-ff23-4098-93fb-debf7681cfce\") " pod="openstack/rabbitmq-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.241528 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7b0d0bd3-ff23-4098-93fb-debf7681cfce-server-conf\") pod \"rabbitmq-server-0\" (UID: \"7b0d0bd3-ff23-4098-93fb-debf7681cfce\") " pod="openstack/rabbitmq-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.242999 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7b0d0bd3-ff23-4098-93fb-debf7681cfce-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"7b0d0bd3-ff23-4098-93fb-debf7681cfce\") " pod="openstack/rabbitmq-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.248427 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7b0d0bd3-ff23-4098-93fb-debf7681cfce-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"7b0d0bd3-ff23-4098-93fb-debf7681cfce\") " pod="openstack/rabbitmq-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.250055 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7b0d0bd3-ff23-4098-93fb-debf7681cfce-pod-info\") pod \"rabbitmq-server-0\" (UID: \"7b0d0bd3-ff23-4098-93fb-debf7681cfce\") " pod="openstack/rabbitmq-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.250339 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7b0d0bd3-ff23-4098-93fb-debf7681cfce-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"7b0d0bd3-ff23-4098-93fb-debf7681cfce\") " pod="openstack/rabbitmq-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.250628 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7b0d0bd3-ff23-4098-93fb-debf7681cfce-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"7b0d0bd3-ff23-4098-93fb-debf7681cfce\") " pod="openstack/rabbitmq-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.259565 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q4zt\" (UniqueName: \"kubernetes.io/projected/7b0d0bd3-ff23-4098-93fb-debf7681cfce-kube-api-access-2q4zt\") pod \"rabbitmq-server-0\" (UID: \"7b0d0bd3-ff23-4098-93fb-debf7681cfce\") " pod="openstack/rabbitmq-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.276164 5014 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"7b0d0bd3-ff23-4098-93fb-debf7681cfce\") " pod="openstack/rabbitmq-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.340490 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3df93ff6-00cf-4c7f-8971-6d1d78795456-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3df93ff6-00cf-4c7f-8971-6d1d78795456\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.340545 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3df93ff6-00cf-4c7f-8971-6d1d78795456-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3df93ff6-00cf-4c7f-8971-6d1d78795456\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.340587 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3df93ff6-00cf-4c7f-8971-6d1d78795456-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3df93ff6-00cf-4c7f-8971-6d1d78795456\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.340681 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3df93ff6-00cf-4c7f-8971-6d1d78795456-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3df93ff6-00cf-4c7f-8971-6d1d78795456\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.340708 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-9t6s2\" (UniqueName: \"kubernetes.io/projected/3df93ff6-00cf-4c7f-8971-6d1d78795456-kube-api-access-9t6s2\") pod \"rabbitmq-cell1-server-0\" (UID: \"3df93ff6-00cf-4c7f-8971-6d1d78795456\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.340748 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3df93ff6-00cf-4c7f-8971-6d1d78795456-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3df93ff6-00cf-4c7f-8971-6d1d78795456\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.340995 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3df93ff6-00cf-4c7f-8971-6d1d78795456-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3df93ff6-00cf-4c7f-8971-6d1d78795456\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.341083 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3df93ff6-00cf-4c7f-8971-6d1d78795456\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.341130 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3df93ff6-00cf-4c7f-8971-6d1d78795456-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3df93ff6-00cf-4c7f-8971-6d1d78795456\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.341174 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/3df93ff6-00cf-4c7f-8971-6d1d78795456-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3df93ff6-00cf-4c7f-8971-6d1d78795456\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.341208 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3df93ff6-00cf-4c7f-8971-6d1d78795456-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3df93ff6-00cf-4c7f-8971-6d1d78795456\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.347557 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.442658 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3df93ff6-00cf-4c7f-8971-6d1d78795456\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.442971 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3df93ff6-00cf-4c7f-8971-6d1d78795456-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3df93ff6-00cf-4c7f-8971-6d1d78795456\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.443004 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3df93ff6-00cf-4c7f-8971-6d1d78795456-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3df93ff6-00cf-4c7f-8971-6d1d78795456\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.443028 5014 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3df93ff6-00cf-4c7f-8971-6d1d78795456-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3df93ff6-00cf-4c7f-8971-6d1d78795456\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.443099 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3df93ff6-00cf-4c7f-8971-6d1d78795456-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3df93ff6-00cf-4c7f-8971-6d1d78795456\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.443128 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3df93ff6-00cf-4c7f-8971-6d1d78795456-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3df93ff6-00cf-4c7f-8971-6d1d78795456\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.443162 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3df93ff6-00cf-4c7f-8971-6d1d78795456-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3df93ff6-00cf-4c7f-8971-6d1d78795456\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.443225 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3df93ff6-00cf-4c7f-8971-6d1d78795456-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3df93ff6-00cf-4c7f-8971-6d1d78795456\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.443253 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t6s2\" (UniqueName: 
\"kubernetes.io/projected/3df93ff6-00cf-4c7f-8971-6d1d78795456-kube-api-access-9t6s2\") pod \"rabbitmq-cell1-server-0\" (UID: \"3df93ff6-00cf-4c7f-8971-6d1d78795456\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.443298 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3df93ff6-00cf-4c7f-8971-6d1d78795456-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3df93ff6-00cf-4c7f-8971-6d1d78795456\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.443371 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3df93ff6-00cf-4c7f-8971-6d1d78795456-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3df93ff6-00cf-4c7f-8971-6d1d78795456\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.444708 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3df93ff6-00cf-4c7f-8971-6d1d78795456-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3df93ff6-00cf-4c7f-8971-6d1d78795456\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.444869 5014 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3df93ff6-00cf-4c7f-8971-6d1d78795456\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.446440 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3df93ff6-00cf-4c7f-8971-6d1d78795456-rabbitmq-plugins\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"3df93ff6-00cf-4c7f-8971-6d1d78795456\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.448278 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3df93ff6-00cf-4c7f-8971-6d1d78795456-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3df93ff6-00cf-4c7f-8971-6d1d78795456\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.448651 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3df93ff6-00cf-4c7f-8971-6d1d78795456-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3df93ff6-00cf-4c7f-8971-6d1d78795456\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.450599 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3df93ff6-00cf-4c7f-8971-6d1d78795456-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3df93ff6-00cf-4c7f-8971-6d1d78795456\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.450743 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3df93ff6-00cf-4c7f-8971-6d1d78795456-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3df93ff6-00cf-4c7f-8971-6d1d78795456\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.451043 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3df93ff6-00cf-4c7f-8971-6d1d78795456-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3df93ff6-00cf-4c7f-8971-6d1d78795456\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:57:03 crc 
kubenswrapper[5014]: I0228 04:57:03.455649 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3df93ff6-00cf-4c7f-8971-6d1d78795456-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3df93ff6-00cf-4c7f-8971-6d1d78795456\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.468854 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3df93ff6-00cf-4c7f-8971-6d1d78795456-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3df93ff6-00cf-4c7f-8971-6d1d78795456\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.470712 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t6s2\" (UniqueName: \"kubernetes.io/projected/3df93ff6-00cf-4c7f-8971-6d1d78795456-kube-api-access-9t6s2\") pod \"rabbitmq-cell1-server-0\" (UID: \"3df93ff6-00cf-4c7f-8971-6d1d78795456\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.477609 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3df93ff6-00cf-4c7f-8971-6d1d78795456\") " pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.515223 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.829550 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 28 04:57:03 crc kubenswrapper[5014]: I0228 04:57:03.842910 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7b0d0bd3-ff23-4098-93fb-debf7681cfce","Type":"ContainerStarted","Data":"3cc62de3b52449d00cd8beac10416808dd8eebb4a371c6a11aa65a18dc69920b"} Feb 28 04:57:04 crc kubenswrapper[5014]: W0228 04:57:04.009589 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3df93ff6_00cf_4c7f_8971_6d1d78795456.slice/crio-2952eeffbfa639b5fde9b3a54e9f174b240fe96641266186f2260e8cc960f2a7 WatchSource:0}: Error finding container 2952eeffbfa639b5fde9b3a54e9f174b240fe96641266186f2260e8cc960f2a7: Status 404 returned error can't find the container with id 2952eeffbfa639b5fde9b3a54e9f174b240fe96641266186f2260e8cc960f2a7 Feb 28 04:57:04 crc kubenswrapper[5014]: I0228 04:57:04.018143 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 28 04:57:04 crc kubenswrapper[5014]: I0228 04:57:04.074843 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-kfmhd"] Feb 28 04:57:04 crc kubenswrapper[5014]: I0228 04:57:04.076504 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-kfmhd" Feb 28 04:57:04 crc kubenswrapper[5014]: I0228 04:57:04.081117 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Feb 28 04:57:04 crc kubenswrapper[5014]: I0228 04:57:04.095909 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-kfmhd"] Feb 28 04:57:04 crc kubenswrapper[5014]: I0228 04:57:04.160959 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e35afd43-f3bc-4344-9771-6481557f1bc5-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-kfmhd\" (UID: \"e35afd43-f3bc-4344-9771-6481557f1bc5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kfmhd" Feb 28 04:57:04 crc kubenswrapper[5014]: I0228 04:57:04.161027 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e35afd43-f3bc-4344-9771-6481557f1bc5-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-kfmhd\" (UID: \"e35afd43-f3bc-4344-9771-6481557f1bc5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kfmhd" Feb 28 04:57:04 crc kubenswrapper[5014]: I0228 04:57:04.161104 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e35afd43-f3bc-4344-9771-6481557f1bc5-config\") pod \"dnsmasq-dns-79bd4cc8c9-kfmhd\" (UID: \"e35afd43-f3bc-4344-9771-6481557f1bc5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kfmhd" Feb 28 04:57:04 crc kubenswrapper[5014]: I0228 04:57:04.161146 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e35afd43-f3bc-4344-9771-6481557f1bc5-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-kfmhd\" (UID: \"e35afd43-f3bc-4344-9771-6481557f1bc5\") " 
pod="openstack/dnsmasq-dns-79bd4cc8c9-kfmhd" Feb 28 04:57:04 crc kubenswrapper[5014]: I0228 04:57:04.161178 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e35afd43-f3bc-4344-9771-6481557f1bc5-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-kfmhd\" (UID: \"e35afd43-f3bc-4344-9771-6481557f1bc5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kfmhd" Feb 28 04:57:04 crc kubenswrapper[5014]: I0228 04:57:04.161277 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e35afd43-f3bc-4344-9771-6481557f1bc5-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-kfmhd\" (UID: \"e35afd43-f3bc-4344-9771-6481557f1bc5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kfmhd" Feb 28 04:57:04 crc kubenswrapper[5014]: I0228 04:57:04.161317 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ztlw\" (UniqueName: \"kubernetes.io/projected/e35afd43-f3bc-4344-9771-6481557f1bc5-kube-api-access-9ztlw\") pod \"dnsmasq-dns-79bd4cc8c9-kfmhd\" (UID: \"e35afd43-f3bc-4344-9771-6481557f1bc5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kfmhd" Feb 28 04:57:04 crc kubenswrapper[5014]: I0228 04:57:04.186021 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="351fb773-0669-41c0-aee8-0469f34d64c9" path="/var/lib/kubelet/pods/351fb773-0669-41c0-aee8-0469f34d64c9/volumes" Feb 28 04:57:04 crc kubenswrapper[5014]: I0228 04:57:04.186960 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46e75f06-c8df-44b8-a6e4-8f663e8b0a1a" path="/var/lib/kubelet/pods/46e75f06-c8df-44b8-a6e4-8f663e8b0a1a/volumes" Feb 28 04:57:04 crc kubenswrapper[5014]: I0228 04:57:04.262634 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/e35afd43-f3bc-4344-9771-6481557f1bc5-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-kfmhd\" (UID: \"e35afd43-f3bc-4344-9771-6481557f1bc5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kfmhd" Feb 28 04:57:04 crc kubenswrapper[5014]: I0228 04:57:04.263073 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ztlw\" (UniqueName: \"kubernetes.io/projected/e35afd43-f3bc-4344-9771-6481557f1bc5-kube-api-access-9ztlw\") pod \"dnsmasq-dns-79bd4cc8c9-kfmhd\" (UID: \"e35afd43-f3bc-4344-9771-6481557f1bc5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kfmhd" Feb 28 04:57:04 crc kubenswrapper[5014]: I0228 04:57:04.263106 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e35afd43-f3bc-4344-9771-6481557f1bc5-dns-swift-storage-0\") pod \"dnsmasq-dns-79bd4cc8c9-kfmhd\" (UID: \"e35afd43-f3bc-4344-9771-6481557f1bc5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kfmhd" Feb 28 04:57:04 crc kubenswrapper[5014]: I0228 04:57:04.263205 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e35afd43-f3bc-4344-9771-6481557f1bc5-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-kfmhd\" (UID: \"e35afd43-f3bc-4344-9771-6481557f1bc5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kfmhd" Feb 28 04:57:04 crc kubenswrapper[5014]: I0228 04:57:04.263303 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e35afd43-f3bc-4344-9771-6481557f1bc5-config\") pod \"dnsmasq-dns-79bd4cc8c9-kfmhd\" (UID: \"e35afd43-f3bc-4344-9771-6481557f1bc5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kfmhd" Feb 28 04:57:04 crc kubenswrapper[5014]: I0228 04:57:04.263381 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/e35afd43-f3bc-4344-9771-6481557f1bc5-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-kfmhd\" (UID: \"e35afd43-f3bc-4344-9771-6481557f1bc5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kfmhd" Feb 28 04:57:04 crc kubenswrapper[5014]: I0228 04:57:04.263417 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e35afd43-f3bc-4344-9771-6481557f1bc5-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-kfmhd\" (UID: \"e35afd43-f3bc-4344-9771-6481557f1bc5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kfmhd" Feb 28 04:57:04 crc kubenswrapper[5014]: I0228 04:57:04.264466 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e35afd43-f3bc-4344-9771-6481557f1bc5-ovsdbserver-sb\") pod \"dnsmasq-dns-79bd4cc8c9-kfmhd\" (UID: \"e35afd43-f3bc-4344-9771-6481557f1bc5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kfmhd" Feb 28 04:57:04 crc kubenswrapper[5014]: I0228 04:57:04.265581 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e35afd43-f3bc-4344-9771-6481557f1bc5-config\") pod \"dnsmasq-dns-79bd4cc8c9-kfmhd\" (UID: \"e35afd43-f3bc-4344-9771-6481557f1bc5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kfmhd" Feb 28 04:57:04 crc kubenswrapper[5014]: I0228 04:57:04.267358 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e35afd43-f3bc-4344-9771-6481557f1bc5-openstack-edpm-ipam\") pod \"dnsmasq-dns-79bd4cc8c9-kfmhd\" (UID: \"e35afd43-f3bc-4344-9771-6481557f1bc5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kfmhd" Feb 28 04:57:04 crc kubenswrapper[5014]: I0228 04:57:04.268499 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e35afd43-f3bc-4344-9771-6481557f1bc5-dns-swift-storage-0\") pod 
\"dnsmasq-dns-79bd4cc8c9-kfmhd\" (UID: \"e35afd43-f3bc-4344-9771-6481557f1bc5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kfmhd" Feb 28 04:57:04 crc kubenswrapper[5014]: I0228 04:57:04.268937 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e35afd43-f3bc-4344-9771-6481557f1bc5-dns-svc\") pod \"dnsmasq-dns-79bd4cc8c9-kfmhd\" (UID: \"e35afd43-f3bc-4344-9771-6481557f1bc5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kfmhd" Feb 28 04:57:04 crc kubenswrapper[5014]: I0228 04:57:04.269121 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e35afd43-f3bc-4344-9771-6481557f1bc5-ovsdbserver-nb\") pod \"dnsmasq-dns-79bd4cc8c9-kfmhd\" (UID: \"e35afd43-f3bc-4344-9771-6481557f1bc5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kfmhd" Feb 28 04:57:05 crc kubenswrapper[5014]: I0228 04:57:04.289031 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ztlw\" (UniqueName: \"kubernetes.io/projected/e35afd43-f3bc-4344-9771-6481557f1bc5-kube-api-access-9ztlw\") pod \"dnsmasq-dns-79bd4cc8c9-kfmhd\" (UID: \"e35afd43-f3bc-4344-9771-6481557f1bc5\") " pod="openstack/dnsmasq-dns-79bd4cc8c9-kfmhd" Feb 28 04:57:05 crc kubenswrapper[5014]: I0228 04:57:04.399662 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-kfmhd" Feb 28 04:57:05 crc kubenswrapper[5014]: I0228 04:57:04.855473 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3df93ff6-00cf-4c7f-8971-6d1d78795456","Type":"ContainerStarted","Data":"2952eeffbfa639b5fde9b3a54e9f174b240fe96641266186f2260e8cc960f2a7"} Feb 28 04:57:05 crc kubenswrapper[5014]: I0228 04:57:05.504573 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-kfmhd"] Feb 28 04:57:05 crc kubenswrapper[5014]: W0228 04:57:05.596617 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode35afd43_f3bc_4344_9771_6481557f1bc5.slice/crio-eee1f46ff1fc2d86ee2433202bff6a295a9b680f26fe569afae9ede407a1e503 WatchSource:0}: Error finding container eee1f46ff1fc2d86ee2433202bff6a295a9b680f26fe569afae9ede407a1e503: Status 404 returned error can't find the container with id eee1f46ff1fc2d86ee2433202bff6a295a9b680f26fe569afae9ede407a1e503 Feb 28 04:57:05 crc kubenswrapper[5014]: I0228 04:57:05.865029 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7b0d0bd3-ff23-4098-93fb-debf7681cfce","Type":"ContainerStarted","Data":"de23dd85af6120a799d3664e483f407f30d037148d485717c97da3c43a4f67bf"} Feb 28 04:57:05 crc kubenswrapper[5014]: I0228 04:57:05.868281 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-kfmhd" event={"ID":"e35afd43-f3bc-4344-9771-6481557f1bc5","Type":"ContainerStarted","Data":"104ba5f76baf47cb10671ab115803744b3765a64def15e08702acc5c0ebf3909"} Feb 28 04:57:05 crc kubenswrapper[5014]: I0228 04:57:05.868320 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-kfmhd" 
event={"ID":"e35afd43-f3bc-4344-9771-6481557f1bc5","Type":"ContainerStarted","Data":"eee1f46ff1fc2d86ee2433202bff6a295a9b680f26fe569afae9ede407a1e503"} Feb 28 04:57:05 crc kubenswrapper[5014]: I0228 04:57:05.871766 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3df93ff6-00cf-4c7f-8971-6d1d78795456","Type":"ContainerStarted","Data":"a3daf2dc8dea16c3fe33d5a265e4654fdd20dbcec9ede05e1c641561766b9bd5"} Feb 28 04:57:06 crc kubenswrapper[5014]: I0228 04:57:06.887864 5014 generic.go:334] "Generic (PLEG): container finished" podID="e35afd43-f3bc-4344-9771-6481557f1bc5" containerID="104ba5f76baf47cb10671ab115803744b3765a64def15e08702acc5c0ebf3909" exitCode=0 Feb 28 04:57:06 crc kubenswrapper[5014]: I0228 04:57:06.887949 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-kfmhd" event={"ID":"e35afd43-f3bc-4344-9771-6481557f1bc5","Type":"ContainerDied","Data":"104ba5f76baf47cb10671ab115803744b3765a64def15e08702acc5c0ebf3909"} Feb 28 04:57:06 crc kubenswrapper[5014]: I0228 04:57:06.888352 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-79bd4cc8c9-kfmhd" Feb 28 04:57:06 crc kubenswrapper[5014]: I0228 04:57:06.888386 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-kfmhd" event={"ID":"e35afd43-f3bc-4344-9771-6481557f1bc5","Type":"ContainerStarted","Data":"6213171d25da845b4d38dcec63afcf28a184586fd1de2f2e9ba4aa9101eff9a0"} Feb 28 04:57:06 crc kubenswrapper[5014]: I0228 04:57:06.926555 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-79bd4cc8c9-kfmhd" podStartSLOduration=2.92653498 podStartE2EDuration="2.92653498s" podCreationTimestamp="2026-02-28 04:57:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:57:06.922694896 +0000 UTC 
m=+1415.592820886" watchObservedRunningTime="2026-02-28 04:57:06.92653498 +0000 UTC m=+1415.596660900" Feb 28 04:57:14 crc kubenswrapper[5014]: I0228 04:57:14.401924 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-79bd4cc8c9-kfmhd" Feb 28 04:57:14 crc kubenswrapper[5014]: I0228 04:57:14.473119 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-4b7rx"] Feb 28 04:57:14 crc kubenswrapper[5014]: I0228 04:57:14.473397 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-89c5cd4d5-4b7rx" podUID="0c076180-9ead-4605-84e7-d0d920d19cdb" containerName="dnsmasq-dns" containerID="cri-o://45beb3fb190864d244f2e3d73b956fba32e04bcb19e0cb03a3e97a246d9047eb" gracePeriod=10 Feb 28 04:57:14 crc kubenswrapper[5014]: I0228 04:57:14.678786 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55478c4467-zxf77"] Feb 28 04:57:14 crc kubenswrapper[5014]: I0228 04:57:14.686202 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-zxf77" Feb 28 04:57:14 crc kubenswrapper[5014]: I0228 04:57:14.738401 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-zxf77"] Feb 28 04:57:14 crc kubenswrapper[5014]: I0228 04:57:14.818514 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67c40633-8133-430b-8528-2aab67995b17-config\") pod \"dnsmasq-dns-55478c4467-zxf77\" (UID: \"67c40633-8133-430b-8528-2aab67995b17\") " pod="openstack/dnsmasq-dns-55478c4467-zxf77" Feb 28 04:57:14 crc kubenswrapper[5014]: I0228 04:57:14.818794 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mwmh\" (UniqueName: \"kubernetes.io/projected/67c40633-8133-430b-8528-2aab67995b17-kube-api-access-4mwmh\") pod \"dnsmasq-dns-55478c4467-zxf77\" (UID: \"67c40633-8133-430b-8528-2aab67995b17\") " pod="openstack/dnsmasq-dns-55478c4467-zxf77" Feb 28 04:57:14 crc kubenswrapper[5014]: I0228 04:57:14.818846 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67c40633-8133-430b-8528-2aab67995b17-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-zxf77\" (UID: \"67c40633-8133-430b-8528-2aab67995b17\") " pod="openstack/dnsmasq-dns-55478c4467-zxf77" Feb 28 04:57:14 crc kubenswrapper[5014]: I0228 04:57:14.818870 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67c40633-8133-430b-8528-2aab67995b17-dns-svc\") pod \"dnsmasq-dns-55478c4467-zxf77\" (UID: \"67c40633-8133-430b-8528-2aab67995b17\") " pod="openstack/dnsmasq-dns-55478c4467-zxf77" Feb 28 04:57:14 crc kubenswrapper[5014]: I0228 04:57:14.818887 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67c40633-8133-430b-8528-2aab67995b17-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-zxf77\" (UID: \"67c40633-8133-430b-8528-2aab67995b17\") " pod="openstack/dnsmasq-dns-55478c4467-zxf77" Feb 28 04:57:14 crc kubenswrapper[5014]: I0228 04:57:14.819378 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/67c40633-8133-430b-8528-2aab67995b17-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-zxf77\" (UID: \"67c40633-8133-430b-8528-2aab67995b17\") " pod="openstack/dnsmasq-dns-55478c4467-zxf77" Feb 28 04:57:14 crc kubenswrapper[5014]: I0228 04:57:14.819488 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/67c40633-8133-430b-8528-2aab67995b17-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-zxf77\" (UID: \"67c40633-8133-430b-8528-2aab67995b17\") " pod="openstack/dnsmasq-dns-55478c4467-zxf77" Feb 28 04:57:14 crc kubenswrapper[5014]: I0228 04:57:14.921425 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/67c40633-8133-430b-8528-2aab67995b17-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-zxf77\" (UID: \"67c40633-8133-430b-8528-2aab67995b17\") " pod="openstack/dnsmasq-dns-55478c4467-zxf77" Feb 28 04:57:14 crc kubenswrapper[5014]: I0228 04:57:14.921470 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67c40633-8133-430b-8528-2aab67995b17-dns-svc\") pod \"dnsmasq-dns-55478c4467-zxf77\" (UID: \"67c40633-8133-430b-8528-2aab67995b17\") " pod="openstack/dnsmasq-dns-55478c4467-zxf77" Feb 28 04:57:14 crc kubenswrapper[5014]: I0228 04:57:14.921487 5014 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67c40633-8133-430b-8528-2aab67995b17-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-zxf77\" (UID: \"67c40633-8133-430b-8528-2aab67995b17\") " pod="openstack/dnsmasq-dns-55478c4467-zxf77" Feb 28 04:57:14 crc kubenswrapper[5014]: I0228 04:57:14.921569 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/67c40633-8133-430b-8528-2aab67995b17-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-zxf77\" (UID: \"67c40633-8133-430b-8528-2aab67995b17\") " pod="openstack/dnsmasq-dns-55478c4467-zxf77" Feb 28 04:57:14 crc kubenswrapper[5014]: I0228 04:57:14.921601 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/67c40633-8133-430b-8528-2aab67995b17-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-zxf77\" (UID: \"67c40633-8133-430b-8528-2aab67995b17\") " pod="openstack/dnsmasq-dns-55478c4467-zxf77" Feb 28 04:57:14 crc kubenswrapper[5014]: I0228 04:57:14.921635 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67c40633-8133-430b-8528-2aab67995b17-config\") pod \"dnsmasq-dns-55478c4467-zxf77\" (UID: \"67c40633-8133-430b-8528-2aab67995b17\") " pod="openstack/dnsmasq-dns-55478c4467-zxf77" Feb 28 04:57:14 crc kubenswrapper[5014]: I0228 04:57:14.921659 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mwmh\" (UniqueName: \"kubernetes.io/projected/67c40633-8133-430b-8528-2aab67995b17-kube-api-access-4mwmh\") pod \"dnsmasq-dns-55478c4467-zxf77\" (UID: \"67c40633-8133-430b-8528-2aab67995b17\") " pod="openstack/dnsmasq-dns-55478c4467-zxf77" Feb 28 04:57:14 crc kubenswrapper[5014]: I0228 04:57:14.922632 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/67c40633-8133-430b-8528-2aab67995b17-ovsdbserver-nb\") pod \"dnsmasq-dns-55478c4467-zxf77\" (UID: \"67c40633-8133-430b-8528-2aab67995b17\") " pod="openstack/dnsmasq-dns-55478c4467-zxf77" Feb 28 04:57:14 crc kubenswrapper[5014]: I0228 04:57:14.923151 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/67c40633-8133-430b-8528-2aab67995b17-dns-svc\") pod \"dnsmasq-dns-55478c4467-zxf77\" (UID: \"67c40633-8133-430b-8528-2aab67995b17\") " pod="openstack/dnsmasq-dns-55478c4467-zxf77" Feb 28 04:57:14 crc kubenswrapper[5014]: I0228 04:57:14.923659 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/67c40633-8133-430b-8528-2aab67995b17-ovsdbserver-sb\") pod \"dnsmasq-dns-55478c4467-zxf77\" (UID: \"67c40633-8133-430b-8528-2aab67995b17\") " pod="openstack/dnsmasq-dns-55478c4467-zxf77" Feb 28 04:57:14 crc kubenswrapper[5014]: I0228 04:57:14.924767 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/67c40633-8133-430b-8528-2aab67995b17-openstack-edpm-ipam\") pod \"dnsmasq-dns-55478c4467-zxf77\" (UID: \"67c40633-8133-430b-8528-2aab67995b17\") " pod="openstack/dnsmasq-dns-55478c4467-zxf77" Feb 28 04:57:14 crc kubenswrapper[5014]: I0228 04:57:14.926027 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/67c40633-8133-430b-8528-2aab67995b17-dns-swift-storage-0\") pod \"dnsmasq-dns-55478c4467-zxf77\" (UID: \"67c40633-8133-430b-8528-2aab67995b17\") " pod="openstack/dnsmasq-dns-55478c4467-zxf77" Feb 28 04:57:14 crc kubenswrapper[5014]: I0228 04:57:14.927496 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/67c40633-8133-430b-8528-2aab67995b17-config\") pod 
\"dnsmasq-dns-55478c4467-zxf77\" (UID: \"67c40633-8133-430b-8528-2aab67995b17\") " pod="openstack/dnsmasq-dns-55478c4467-zxf77" Feb 28 04:57:14 crc kubenswrapper[5014]: I0228 04:57:14.944560 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mwmh\" (UniqueName: \"kubernetes.io/projected/67c40633-8133-430b-8528-2aab67995b17-kube-api-access-4mwmh\") pod \"dnsmasq-dns-55478c4467-zxf77\" (UID: \"67c40633-8133-430b-8528-2aab67995b17\") " pod="openstack/dnsmasq-dns-55478c4467-zxf77" Feb 28 04:57:14 crc kubenswrapper[5014]: I0228 04:57:14.990111 5014 generic.go:334] "Generic (PLEG): container finished" podID="0c076180-9ead-4605-84e7-d0d920d19cdb" containerID="45beb3fb190864d244f2e3d73b956fba32e04bcb19e0cb03a3e97a246d9047eb" exitCode=0 Feb 28 04:57:14 crc kubenswrapper[5014]: I0228 04:57:14.990167 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-4b7rx" event={"ID":"0c076180-9ead-4605-84e7-d0d920d19cdb","Type":"ContainerDied","Data":"45beb3fb190864d244f2e3d73b956fba32e04bcb19e0cb03a3e97a246d9047eb"} Feb 28 04:57:14 crc kubenswrapper[5014]: I0228 04:57:14.990194 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-89c5cd4d5-4b7rx" event={"ID":"0c076180-9ead-4605-84e7-d0d920d19cdb","Type":"ContainerDied","Data":"090b73ae7121ff0e2e0e8babf5acf5c269491a77ba512cf383e5c628b9d76304"} Feb 28 04:57:14 crc kubenswrapper[5014]: I0228 04:57:14.990206 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="090b73ae7121ff0e2e0e8babf5acf5c269491a77ba512cf383e5c628b9d76304" Feb 28 04:57:15 crc kubenswrapper[5014]: I0228 04:57:15.018096 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55478c4467-zxf77" Feb 28 04:57:15 crc kubenswrapper[5014]: I0228 04:57:15.178700 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-4b7rx" Feb 28 04:57:15 crc kubenswrapper[5014]: I0228 04:57:15.225073 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c076180-9ead-4605-84e7-d0d920d19cdb-dns-swift-storage-0\") pod \"0c076180-9ead-4605-84e7-d0d920d19cdb\" (UID: \"0c076180-9ead-4605-84e7-d0d920d19cdb\") " Feb 28 04:57:15 crc kubenswrapper[5014]: I0228 04:57:15.225325 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z9729\" (UniqueName: \"kubernetes.io/projected/0c076180-9ead-4605-84e7-d0d920d19cdb-kube-api-access-z9729\") pod \"0c076180-9ead-4605-84e7-d0d920d19cdb\" (UID: \"0c076180-9ead-4605-84e7-d0d920d19cdb\") " Feb 28 04:57:15 crc kubenswrapper[5014]: I0228 04:57:15.225393 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c076180-9ead-4605-84e7-d0d920d19cdb-config\") pod \"0c076180-9ead-4605-84e7-d0d920d19cdb\" (UID: \"0c076180-9ead-4605-84e7-d0d920d19cdb\") " Feb 28 04:57:15 crc kubenswrapper[5014]: I0228 04:57:15.225409 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c076180-9ead-4605-84e7-d0d920d19cdb-ovsdbserver-sb\") pod \"0c076180-9ead-4605-84e7-d0d920d19cdb\" (UID: \"0c076180-9ead-4605-84e7-d0d920d19cdb\") " Feb 28 04:57:15 crc kubenswrapper[5014]: I0228 04:57:15.225523 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c076180-9ead-4605-84e7-d0d920d19cdb-ovsdbserver-nb\") pod \"0c076180-9ead-4605-84e7-d0d920d19cdb\" (UID: \"0c076180-9ead-4605-84e7-d0d920d19cdb\") " Feb 28 04:57:15 crc kubenswrapper[5014]: I0228 04:57:15.225576 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c076180-9ead-4605-84e7-d0d920d19cdb-dns-svc\") pod \"0c076180-9ead-4605-84e7-d0d920d19cdb\" (UID: \"0c076180-9ead-4605-84e7-d0d920d19cdb\") " Feb 28 04:57:15 crc kubenswrapper[5014]: I0228 04:57:15.232596 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c076180-9ead-4605-84e7-d0d920d19cdb-kube-api-access-z9729" (OuterVolumeSpecName: "kube-api-access-z9729") pod "0c076180-9ead-4605-84e7-d0d920d19cdb" (UID: "0c076180-9ead-4605-84e7-d0d920d19cdb"). InnerVolumeSpecName "kube-api-access-z9729". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:57:15 crc kubenswrapper[5014]: I0228 04:57:15.280415 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c076180-9ead-4605-84e7-d0d920d19cdb-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "0c076180-9ead-4605-84e7-d0d920d19cdb" (UID: "0c076180-9ead-4605-84e7-d0d920d19cdb"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:57:15 crc kubenswrapper[5014]: I0228 04:57:15.281725 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c076180-9ead-4605-84e7-d0d920d19cdb-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "0c076180-9ead-4605-84e7-d0d920d19cdb" (UID: "0c076180-9ead-4605-84e7-d0d920d19cdb"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:57:15 crc kubenswrapper[5014]: I0228 04:57:15.297062 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c076180-9ead-4605-84e7-d0d920d19cdb-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0c076180-9ead-4605-84e7-d0d920d19cdb" (UID: "0c076180-9ead-4605-84e7-d0d920d19cdb"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:57:15 crc kubenswrapper[5014]: E0228 04:57:15.308190 5014 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/0c076180-9ead-4605-84e7-d0d920d19cdb-config podName:0c076180-9ead-4605-84e7-d0d920d19cdb nodeName:}" failed. No retries permitted until 2026-02-28 04:57:15.808162215 +0000 UTC m=+1424.478288125 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "config" (UniqueName: "kubernetes.io/configmap/0c076180-9ead-4605-84e7-d0d920d19cdb-config") pod "0c076180-9ead-4605-84e7-d0d920d19cdb" (UID: "0c076180-9ead-4605-84e7-d0d920d19cdb") : error deleting /var/lib/kubelet/pods/0c076180-9ead-4605-84e7-d0d920d19cdb/volume-subpaths: remove /var/lib/kubelet/pods/0c076180-9ead-4605-84e7-d0d920d19cdb/volume-subpaths: no such file or directory Feb 28 04:57:15 crc kubenswrapper[5014]: I0228 04:57:15.308469 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c076180-9ead-4605-84e7-d0d920d19cdb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0c076180-9ead-4605-84e7-d0d920d19cdb" (UID: "0c076180-9ead-4605-84e7-d0d920d19cdb"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:57:15 crc kubenswrapper[5014]: I0228 04:57:15.327388 5014 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0c076180-9ead-4605-84e7-d0d920d19cdb-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 04:57:15 crc kubenswrapper[5014]: I0228 04:57:15.327417 5014 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/0c076180-9ead-4605-84e7-d0d920d19cdb-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 28 04:57:15 crc kubenswrapper[5014]: I0228 04:57:15.327428 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z9729\" (UniqueName: \"kubernetes.io/projected/0c076180-9ead-4605-84e7-d0d920d19cdb-kube-api-access-z9729\") on node \"crc\" DevicePath \"\"" Feb 28 04:57:15 crc kubenswrapper[5014]: I0228 04:57:15.327441 5014 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/0c076180-9ead-4605-84e7-d0d920d19cdb-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 04:57:15 crc kubenswrapper[5014]: I0228 04:57:15.327450 5014 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0c076180-9ead-4605-84e7-d0d920d19cdb-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 04:57:15 crc kubenswrapper[5014]: I0228 04:57:15.592556 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55478c4467-zxf77"] Feb 28 04:57:15 crc kubenswrapper[5014]: I0228 04:57:15.711237 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:57:15 crc kubenswrapper[5014]: I0228 04:57:15.711282 5014 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:57:15 crc kubenswrapper[5014]: I0228 04:57:15.833440 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c076180-9ead-4605-84e7-d0d920d19cdb-config\") pod \"0c076180-9ead-4605-84e7-d0d920d19cdb\" (UID: \"0c076180-9ead-4605-84e7-d0d920d19cdb\") " Feb 28 04:57:15 crc kubenswrapper[5014]: I0228 04:57:15.835267 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c076180-9ead-4605-84e7-d0d920d19cdb-config" (OuterVolumeSpecName: "config") pod "0c076180-9ead-4605-84e7-d0d920d19cdb" (UID: "0c076180-9ead-4605-84e7-d0d920d19cdb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:57:15 crc kubenswrapper[5014]: I0228 04:57:15.936507 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c076180-9ead-4605-84e7-d0d920d19cdb-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:57:16 crc kubenswrapper[5014]: I0228 04:57:16.000881 5014 generic.go:334] "Generic (PLEG): container finished" podID="67c40633-8133-430b-8528-2aab67995b17" containerID="8082fa92e30c6491db04a62979f4d896f4f1b3fe98a7ba45f9310d295d19b079" exitCode=0 Feb 28 04:57:16 crc kubenswrapper[5014]: I0228 04:57:16.000954 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-89c5cd4d5-4b7rx" Feb 28 04:57:16 crc kubenswrapper[5014]: I0228 04:57:16.000943 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-zxf77" event={"ID":"67c40633-8133-430b-8528-2aab67995b17","Type":"ContainerDied","Data":"8082fa92e30c6491db04a62979f4d896f4f1b3fe98a7ba45f9310d295d19b079"} Feb 28 04:57:16 crc kubenswrapper[5014]: I0228 04:57:16.001124 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-zxf77" event={"ID":"67c40633-8133-430b-8528-2aab67995b17","Type":"ContainerStarted","Data":"e0dd1cd8ecbc16ae5d84c57bd19b30917c3712e1a54cb58f8f7e419515248fa0"} Feb 28 04:57:16 crc kubenswrapper[5014]: I0228 04:57:16.256496 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-4b7rx"] Feb 28 04:57:16 crc kubenswrapper[5014]: I0228 04:57:16.272569 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-89c5cd4d5-4b7rx"] Feb 28 04:57:17 crc kubenswrapper[5014]: I0228 04:57:17.014308 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55478c4467-zxf77" event={"ID":"67c40633-8133-430b-8528-2aab67995b17","Type":"ContainerStarted","Data":"3e61ac206fbbaeb1ee04ac9a1cd873b788782aa0e14f24f5feea7e8983831a48"} Feb 28 04:57:17 crc kubenswrapper[5014]: I0228 04:57:17.014658 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55478c4467-zxf77" Feb 28 04:57:17 crc kubenswrapper[5014]: I0228 04:57:17.056035 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55478c4467-zxf77" podStartSLOduration=3.056005737 podStartE2EDuration="3.056005737s" podCreationTimestamp="2026-02-28 04:57:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:57:17.042596375 +0000 UTC m=+1425.712722295" 
watchObservedRunningTime="2026-02-28 04:57:17.056005737 +0000 UTC m=+1425.726131717" Feb 28 04:57:18 crc kubenswrapper[5014]: I0228 04:57:18.185270 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c076180-9ead-4605-84e7-d0d920d19cdb" path="/var/lib/kubelet/pods/0c076180-9ead-4605-84e7-d0d920d19cdb/volumes" Feb 28 04:57:25 crc kubenswrapper[5014]: I0228 04:57:25.020244 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55478c4467-zxf77" Feb 28 04:57:25 crc kubenswrapper[5014]: I0228 04:57:25.089409 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-kfmhd"] Feb 28 04:57:25 crc kubenswrapper[5014]: I0228 04:57:25.089636 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-79bd4cc8c9-kfmhd" podUID="e35afd43-f3bc-4344-9771-6481557f1bc5" containerName="dnsmasq-dns" containerID="cri-o://6213171d25da845b4d38dcec63afcf28a184586fd1de2f2e9ba4aa9101eff9a0" gracePeriod=10 Feb 28 04:57:25 crc kubenswrapper[5014]: I0228 04:57:25.626683 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-kfmhd" Feb 28 04:57:25 crc kubenswrapper[5014]: I0228 04:57:25.777845 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e35afd43-f3bc-4344-9771-6481557f1bc5-config\") pod \"e35afd43-f3bc-4344-9771-6481557f1bc5\" (UID: \"e35afd43-f3bc-4344-9771-6481557f1bc5\") " Feb 28 04:57:25 crc kubenswrapper[5014]: I0228 04:57:25.778098 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e35afd43-f3bc-4344-9771-6481557f1bc5-ovsdbserver-sb\") pod \"e35afd43-f3bc-4344-9771-6481557f1bc5\" (UID: \"e35afd43-f3bc-4344-9771-6481557f1bc5\") " Feb 28 04:57:25 crc kubenswrapper[5014]: I0228 04:57:25.778157 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e35afd43-f3bc-4344-9771-6481557f1bc5-ovsdbserver-nb\") pod \"e35afd43-f3bc-4344-9771-6481557f1bc5\" (UID: \"e35afd43-f3bc-4344-9771-6481557f1bc5\") " Feb 28 04:57:25 crc kubenswrapper[5014]: I0228 04:57:25.778253 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e35afd43-f3bc-4344-9771-6481557f1bc5-openstack-edpm-ipam\") pod \"e35afd43-f3bc-4344-9771-6481557f1bc5\" (UID: \"e35afd43-f3bc-4344-9771-6481557f1bc5\") " Feb 28 04:57:25 crc kubenswrapper[5014]: I0228 04:57:25.778338 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ztlw\" (UniqueName: \"kubernetes.io/projected/e35afd43-f3bc-4344-9771-6481557f1bc5-kube-api-access-9ztlw\") pod \"e35afd43-f3bc-4344-9771-6481557f1bc5\" (UID: \"e35afd43-f3bc-4344-9771-6481557f1bc5\") " Feb 28 04:57:25 crc kubenswrapper[5014]: I0228 04:57:25.778403 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e35afd43-f3bc-4344-9771-6481557f1bc5-dns-svc\") pod \"e35afd43-f3bc-4344-9771-6481557f1bc5\" (UID: \"e35afd43-f3bc-4344-9771-6481557f1bc5\") " Feb 28 04:57:25 crc kubenswrapper[5014]: I0228 04:57:25.778430 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e35afd43-f3bc-4344-9771-6481557f1bc5-dns-swift-storage-0\") pod \"e35afd43-f3bc-4344-9771-6481557f1bc5\" (UID: \"e35afd43-f3bc-4344-9771-6481557f1bc5\") " Feb 28 04:57:25 crc kubenswrapper[5014]: I0228 04:57:25.784260 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e35afd43-f3bc-4344-9771-6481557f1bc5-kube-api-access-9ztlw" (OuterVolumeSpecName: "kube-api-access-9ztlw") pod "e35afd43-f3bc-4344-9771-6481557f1bc5" (UID: "e35afd43-f3bc-4344-9771-6481557f1bc5"). InnerVolumeSpecName "kube-api-access-9ztlw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:57:25 crc kubenswrapper[5014]: I0228 04:57:25.835561 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e35afd43-f3bc-4344-9771-6481557f1bc5-config" (OuterVolumeSpecName: "config") pod "e35afd43-f3bc-4344-9771-6481557f1bc5" (UID: "e35afd43-f3bc-4344-9771-6481557f1bc5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:57:25 crc kubenswrapper[5014]: I0228 04:57:25.848310 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e35afd43-f3bc-4344-9771-6481557f1bc5-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e35afd43-f3bc-4344-9771-6481557f1bc5" (UID: "e35afd43-f3bc-4344-9771-6481557f1bc5"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:57:25 crc kubenswrapper[5014]: I0228 04:57:25.849455 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e35afd43-f3bc-4344-9771-6481557f1bc5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e35afd43-f3bc-4344-9771-6481557f1bc5" (UID: "e35afd43-f3bc-4344-9771-6481557f1bc5"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:57:25 crc kubenswrapper[5014]: I0228 04:57:25.852584 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e35afd43-f3bc-4344-9771-6481557f1bc5-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "e35afd43-f3bc-4344-9771-6481557f1bc5" (UID: "e35afd43-f3bc-4344-9771-6481557f1bc5"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:57:25 crc kubenswrapper[5014]: I0228 04:57:25.856693 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e35afd43-f3bc-4344-9771-6481557f1bc5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e35afd43-f3bc-4344-9771-6481557f1bc5" (UID: "e35afd43-f3bc-4344-9771-6481557f1bc5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:57:25 crc kubenswrapper[5014]: I0228 04:57:25.868870 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e35afd43-f3bc-4344-9771-6481557f1bc5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e35afd43-f3bc-4344-9771-6481557f1bc5" (UID: "e35afd43-f3bc-4344-9771-6481557f1bc5"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 04:57:25 crc kubenswrapper[5014]: I0228 04:57:25.880327 5014 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e35afd43-f3bc-4344-9771-6481557f1bc5-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 28 04:57:25 crc kubenswrapper[5014]: I0228 04:57:25.880360 5014 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e35afd43-f3bc-4344-9771-6481557f1bc5-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 28 04:57:25 crc kubenswrapper[5014]: I0228 04:57:25.880372 5014 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e35afd43-f3bc-4344-9771-6481557f1bc5-config\") on node \"crc\" DevicePath \"\"" Feb 28 04:57:25 crc kubenswrapper[5014]: I0228 04:57:25.880381 5014 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e35afd43-f3bc-4344-9771-6481557f1bc5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 28 04:57:25 crc kubenswrapper[5014]: I0228 04:57:25.880393 5014 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e35afd43-f3bc-4344-9771-6481557f1bc5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 28 04:57:25 crc kubenswrapper[5014]: I0228 04:57:25.880401 5014 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/e35afd43-f3bc-4344-9771-6481557f1bc5-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 04:57:25 crc kubenswrapper[5014]: I0228 04:57:25.880409 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ztlw\" (UniqueName: \"kubernetes.io/projected/e35afd43-f3bc-4344-9771-6481557f1bc5-kube-api-access-9ztlw\") on node \"crc\" DevicePath \"\"" Feb 28 04:57:26 crc kubenswrapper[5014]: I0228 04:57:26.127081 
5014 generic.go:334] "Generic (PLEG): container finished" podID="e35afd43-f3bc-4344-9771-6481557f1bc5" containerID="6213171d25da845b4d38dcec63afcf28a184586fd1de2f2e9ba4aa9101eff9a0" exitCode=0 Feb 28 04:57:26 crc kubenswrapper[5014]: I0228 04:57:26.127141 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-kfmhd" event={"ID":"e35afd43-f3bc-4344-9771-6481557f1bc5","Type":"ContainerDied","Data":"6213171d25da845b4d38dcec63afcf28a184586fd1de2f2e9ba4aa9101eff9a0"} Feb 28 04:57:26 crc kubenswrapper[5014]: I0228 04:57:26.127169 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-79bd4cc8c9-kfmhd" event={"ID":"e35afd43-f3bc-4344-9771-6481557f1bc5","Type":"ContainerDied","Data":"eee1f46ff1fc2d86ee2433202bff6a295a9b680f26fe569afae9ede407a1e503"} Feb 28 04:57:26 crc kubenswrapper[5014]: I0228 04:57:26.127187 5014 scope.go:117] "RemoveContainer" containerID="6213171d25da845b4d38dcec63afcf28a184586fd1de2f2e9ba4aa9101eff9a0" Feb 28 04:57:26 crc kubenswrapper[5014]: I0228 04:57:26.127210 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-79bd4cc8c9-kfmhd" Feb 28 04:57:26 crc kubenswrapper[5014]: I0228 04:57:26.147794 5014 scope.go:117] "RemoveContainer" containerID="104ba5f76baf47cb10671ab115803744b3765a64def15e08702acc5c0ebf3909" Feb 28 04:57:26 crc kubenswrapper[5014]: I0228 04:57:26.177073 5014 scope.go:117] "RemoveContainer" containerID="6213171d25da845b4d38dcec63afcf28a184586fd1de2f2e9ba4aa9101eff9a0" Feb 28 04:57:26 crc kubenswrapper[5014]: E0228 04:57:26.177849 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6213171d25da845b4d38dcec63afcf28a184586fd1de2f2e9ba4aa9101eff9a0\": container with ID starting with 6213171d25da845b4d38dcec63afcf28a184586fd1de2f2e9ba4aa9101eff9a0 not found: ID does not exist" containerID="6213171d25da845b4d38dcec63afcf28a184586fd1de2f2e9ba4aa9101eff9a0" Feb 28 04:57:26 crc kubenswrapper[5014]: I0228 04:57:26.177942 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6213171d25da845b4d38dcec63afcf28a184586fd1de2f2e9ba4aa9101eff9a0"} err="failed to get container status \"6213171d25da845b4d38dcec63afcf28a184586fd1de2f2e9ba4aa9101eff9a0\": rpc error: code = NotFound desc = could not find container \"6213171d25da845b4d38dcec63afcf28a184586fd1de2f2e9ba4aa9101eff9a0\": container with ID starting with 6213171d25da845b4d38dcec63afcf28a184586fd1de2f2e9ba4aa9101eff9a0 not found: ID does not exist" Feb 28 04:57:26 crc kubenswrapper[5014]: I0228 04:57:26.177981 5014 scope.go:117] "RemoveContainer" containerID="104ba5f76baf47cb10671ab115803744b3765a64def15e08702acc5c0ebf3909" Feb 28 04:57:26 crc kubenswrapper[5014]: E0228 04:57:26.178527 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"104ba5f76baf47cb10671ab115803744b3765a64def15e08702acc5c0ebf3909\": container with ID starting with 
104ba5f76baf47cb10671ab115803744b3765a64def15e08702acc5c0ebf3909 not found: ID does not exist" containerID="104ba5f76baf47cb10671ab115803744b3765a64def15e08702acc5c0ebf3909" Feb 28 04:57:26 crc kubenswrapper[5014]: I0228 04:57:26.178617 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"104ba5f76baf47cb10671ab115803744b3765a64def15e08702acc5c0ebf3909"} err="failed to get container status \"104ba5f76baf47cb10671ab115803744b3765a64def15e08702acc5c0ebf3909\": rpc error: code = NotFound desc = could not find container \"104ba5f76baf47cb10671ab115803744b3765a64def15e08702acc5c0ebf3909\": container with ID starting with 104ba5f76baf47cb10671ab115803744b3765a64def15e08702acc5c0ebf3909 not found: ID does not exist" Feb 28 04:57:26 crc kubenswrapper[5014]: I0228 04:57:26.190538 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-kfmhd"] Feb 28 04:57:26 crc kubenswrapper[5014]: I0228 04:57:26.195190 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-79bd4cc8c9-kfmhd"] Feb 28 04:57:28 crc kubenswrapper[5014]: I0228 04:57:28.189994 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e35afd43-f3bc-4344-9771-6481557f1bc5" path="/var/lib/kubelet/pods/e35afd43-f3bc-4344-9771-6481557f1bc5/volumes" Feb 28 04:57:38 crc kubenswrapper[5014]: I0228 04:57:38.130061 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj"] Feb 28 04:57:38 crc kubenswrapper[5014]: E0228 04:57:38.131035 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e35afd43-f3bc-4344-9771-6481557f1bc5" containerName="init" Feb 28 04:57:38 crc kubenswrapper[5014]: I0228 04:57:38.131051 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="e35afd43-f3bc-4344-9771-6481557f1bc5" containerName="init" Feb 28 04:57:38 crc kubenswrapper[5014]: E0228 04:57:38.131074 5014 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="0c076180-9ead-4605-84e7-d0d920d19cdb" containerName="init" Feb 28 04:57:38 crc kubenswrapper[5014]: I0228 04:57:38.131081 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c076180-9ead-4605-84e7-d0d920d19cdb" containerName="init" Feb 28 04:57:38 crc kubenswrapper[5014]: E0228 04:57:38.131097 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c076180-9ead-4605-84e7-d0d920d19cdb" containerName="dnsmasq-dns" Feb 28 04:57:38 crc kubenswrapper[5014]: I0228 04:57:38.131109 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c076180-9ead-4605-84e7-d0d920d19cdb" containerName="dnsmasq-dns" Feb 28 04:57:38 crc kubenswrapper[5014]: E0228 04:57:38.131141 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e35afd43-f3bc-4344-9771-6481557f1bc5" containerName="dnsmasq-dns" Feb 28 04:57:38 crc kubenswrapper[5014]: I0228 04:57:38.131149 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="e35afd43-f3bc-4344-9771-6481557f1bc5" containerName="dnsmasq-dns" Feb 28 04:57:38 crc kubenswrapper[5014]: I0228 04:57:38.131359 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c076180-9ead-4605-84e7-d0d920d19cdb" containerName="dnsmasq-dns" Feb 28 04:57:38 crc kubenswrapper[5014]: I0228 04:57:38.131381 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="e35afd43-f3bc-4344-9771-6481557f1bc5" containerName="dnsmasq-dns" Feb 28 04:57:38 crc kubenswrapper[5014]: I0228 04:57:38.132085 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj" Feb 28 04:57:38 crc kubenswrapper[5014]: I0228 04:57:38.134163 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6dz6b" Feb 28 04:57:38 crc kubenswrapper[5014]: I0228 04:57:38.135029 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 04:57:38 crc kubenswrapper[5014]: I0228 04:57:38.136527 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 04:57:38 crc kubenswrapper[5014]: I0228 04:57:38.143901 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 04:57:38 crc kubenswrapper[5014]: I0228 04:57:38.155169 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj"] Feb 28 04:57:38 crc kubenswrapper[5014]: I0228 04:57:38.253274 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01598708-a115-4ecd-a957-e78d6dbedfcb-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj\" (UID: \"01598708-a115-4ecd-a957-e78d6dbedfcb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj" Feb 28 04:57:38 crc kubenswrapper[5014]: I0228 04:57:38.253379 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01598708-a115-4ecd-a957-e78d6dbedfcb-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj\" (UID: \"01598708-a115-4ecd-a957-e78d6dbedfcb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj" Feb 28 04:57:38 crc kubenswrapper[5014]: I0228 04:57:38.253515 5014 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8l44\" (UniqueName: \"kubernetes.io/projected/01598708-a115-4ecd-a957-e78d6dbedfcb-kube-api-access-b8l44\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj\" (UID: \"01598708-a115-4ecd-a957-e78d6dbedfcb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj" Feb 28 04:57:38 crc kubenswrapper[5014]: I0228 04:57:38.253971 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01598708-a115-4ecd-a957-e78d6dbedfcb-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj\" (UID: \"01598708-a115-4ecd-a957-e78d6dbedfcb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj" Feb 28 04:57:38 crc kubenswrapper[5014]: I0228 04:57:38.355106 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01598708-a115-4ecd-a957-e78d6dbedfcb-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj\" (UID: \"01598708-a115-4ecd-a957-e78d6dbedfcb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj" Feb 28 04:57:38 crc kubenswrapper[5014]: I0228 04:57:38.355149 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8l44\" (UniqueName: \"kubernetes.io/projected/01598708-a115-4ecd-a957-e78d6dbedfcb-kube-api-access-b8l44\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj\" (UID: \"01598708-a115-4ecd-a957-e78d6dbedfcb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj" Feb 28 04:57:38 crc kubenswrapper[5014]: I0228 04:57:38.355205 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/01598708-a115-4ecd-a957-e78d6dbedfcb-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj\" (UID: \"01598708-a115-4ecd-a957-e78d6dbedfcb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj" Feb 28 04:57:38 crc kubenswrapper[5014]: I0228 04:57:38.355314 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01598708-a115-4ecd-a957-e78d6dbedfcb-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj\" (UID: \"01598708-a115-4ecd-a957-e78d6dbedfcb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj" Feb 28 04:57:38 crc kubenswrapper[5014]: I0228 04:57:38.361394 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01598708-a115-4ecd-a957-e78d6dbedfcb-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj\" (UID: \"01598708-a115-4ecd-a957-e78d6dbedfcb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj" Feb 28 04:57:38 crc kubenswrapper[5014]: I0228 04:57:38.362901 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01598708-a115-4ecd-a957-e78d6dbedfcb-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj\" (UID: \"01598708-a115-4ecd-a957-e78d6dbedfcb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj" Feb 28 04:57:38 crc kubenswrapper[5014]: I0228 04:57:38.363602 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01598708-a115-4ecd-a957-e78d6dbedfcb-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj\" (UID: \"01598708-a115-4ecd-a957-e78d6dbedfcb\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj" Feb 28 04:57:38 crc kubenswrapper[5014]: I0228 04:57:38.391700 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8l44\" (UniqueName: \"kubernetes.io/projected/01598708-a115-4ecd-a957-e78d6dbedfcb-kube-api-access-b8l44\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj\" (UID: \"01598708-a115-4ecd-a957-e78d6dbedfcb\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj" Feb 28 04:57:38 crc kubenswrapper[5014]: I0228 04:57:38.481993 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj" Feb 28 04:57:39 crc kubenswrapper[5014]: I0228 04:57:39.135988 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj"] Feb 28 04:57:39 crc kubenswrapper[5014]: W0228 04:57:39.136898 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01598708_a115_4ecd_a957_e78d6dbedfcb.slice/crio-a0aa47a8d876bbfba26919f73b95e688d5d2abe0d55c5c09fec0fa083e312449 WatchSource:0}: Error finding container a0aa47a8d876bbfba26919f73b95e688d5d2abe0d55c5c09fec0fa083e312449: Status 404 returned error can't find the container with id a0aa47a8d876bbfba26919f73b95e688d5d2abe0d55c5c09fec0fa083e312449 Feb 28 04:57:39 crc kubenswrapper[5014]: I0228 04:57:39.272604 5014 generic.go:334] "Generic (PLEG): container finished" podID="3df93ff6-00cf-4c7f-8971-6d1d78795456" containerID="a3daf2dc8dea16c3fe33d5a265e4654fdd20dbcec9ede05e1c641561766b9bd5" exitCode=0 Feb 28 04:57:39 crc kubenswrapper[5014]: I0228 04:57:39.272770 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"3df93ff6-00cf-4c7f-8971-6d1d78795456","Type":"ContainerDied","Data":"a3daf2dc8dea16c3fe33d5a265e4654fdd20dbcec9ede05e1c641561766b9bd5"} Feb 28 04:57:39 crc kubenswrapper[5014]: I0228 04:57:39.274518 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj" event={"ID":"01598708-a115-4ecd-a957-e78d6dbedfcb","Type":"ContainerStarted","Data":"a0aa47a8d876bbfba26919f73b95e688d5d2abe0d55c5c09fec0fa083e312449"} Feb 28 04:57:39 crc kubenswrapper[5014]: I0228 04:57:39.276951 5014 generic.go:334] "Generic (PLEG): container finished" podID="7b0d0bd3-ff23-4098-93fb-debf7681cfce" containerID="de23dd85af6120a799d3664e483f407f30d037148d485717c97da3c43a4f67bf" exitCode=0 Feb 28 04:57:39 crc kubenswrapper[5014]: I0228 04:57:39.276979 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7b0d0bd3-ff23-4098-93fb-debf7681cfce","Type":"ContainerDied","Data":"de23dd85af6120a799d3664e483f407f30d037148d485717c97da3c43a4f67bf"} Feb 28 04:57:40 crc kubenswrapper[5014]: I0228 04:57:40.286249 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3df93ff6-00cf-4c7f-8971-6d1d78795456","Type":"ContainerStarted","Data":"8cb1af8e2e1ac158bd29567df1bcc514d0809e62c27bd88bed679965a0b0bcfc"} Feb 28 04:57:40 crc kubenswrapper[5014]: I0228 04:57:40.287568 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:57:40 crc kubenswrapper[5014]: I0228 04:57:40.290800 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"7b0d0bd3-ff23-4098-93fb-debf7681cfce","Type":"ContainerStarted","Data":"ee51e6a3d99df2029b7bec7258ae6028b5fd967f0339b6b4c9a924b9656ccf16"} Feb 28 04:57:40 crc kubenswrapper[5014]: I0228 04:57:40.291040 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" 
Feb 28 04:57:40 crc kubenswrapper[5014]: I0228 04:57:40.318338 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.318316243 podStartE2EDuration="37.318316243s" podCreationTimestamp="2026-02-28 04:57:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:57:40.311255541 +0000 UTC m=+1448.981381451" watchObservedRunningTime="2026-02-28 04:57:40.318316243 +0000 UTC m=+1448.988442153" Feb 28 04:57:40 crc kubenswrapper[5014]: I0228 04:57:40.349262 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.349239608 podStartE2EDuration="38.349239608s" podCreationTimestamp="2026-02-28 04:57:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 04:57:40.332620379 +0000 UTC m=+1449.002746309" watchObservedRunningTime="2026-02-28 04:57:40.349239608 +0000 UTC m=+1449.019365518" Feb 28 04:57:45 crc kubenswrapper[5014]: I0228 04:57:45.706581 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 04:57:45 crc kubenswrapper[5014]: I0228 04:57:45.707273 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 04:57:45 crc kubenswrapper[5014]: I0228 04:57:45.707340 5014 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cct62" Feb 28 04:57:45 crc kubenswrapper[5014]: I0228 04:57:45.708329 5014 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a1900058ed5d5055efcaa8b7a5a928b3456052935d481ae9dedaea0c3e448c54"} pod="openshift-machine-config-operator/machine-config-daemon-cct62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 04:57:45 crc kubenswrapper[5014]: I0228 04:57:45.708424 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" containerID="cri-o://a1900058ed5d5055efcaa8b7a5a928b3456052935d481ae9dedaea0c3e448c54" gracePeriod=600 Feb 28 04:57:46 crc kubenswrapper[5014]: I0228 04:57:46.374168 5014 generic.go:334] "Generic (PLEG): container finished" podID="6aad0009-d904-48f8-8e30-82205907ece1" containerID="a1900058ed5d5055efcaa8b7a5a928b3456052935d481ae9dedaea0c3e448c54" exitCode=0 Feb 28 04:57:46 crc kubenswrapper[5014]: I0228 04:57:46.374284 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" event={"ID":"6aad0009-d904-48f8-8e30-82205907ece1","Type":"ContainerDied","Data":"a1900058ed5d5055efcaa8b7a5a928b3456052935d481ae9dedaea0c3e448c54"} Feb 28 04:57:46 crc kubenswrapper[5014]: I0228 04:57:46.374529 5014 scope.go:117] "RemoveContainer" containerID="9fe0724568f1359a83a127eb5109a6ee8f87dacb3ce893d1b36328a0a6724e45" Feb 28 04:57:48 crc kubenswrapper[5014]: I0228 04:57:48.396106 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" 
event={"ID":"6aad0009-d904-48f8-8e30-82205907ece1","Type":"ContainerStarted","Data":"831c080a3f614d28f54435ea4566bbc7b3d9ce5cf8da86a40f56d8dfeed0dff0"} Feb 28 04:57:48 crc kubenswrapper[5014]: I0228 04:57:48.397857 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj" event={"ID":"01598708-a115-4ecd-a957-e78d6dbedfcb","Type":"ContainerStarted","Data":"18d18facccaf6ca76921daaec6723ce8f2455a2ec337f37de9d843d79105dd62"} Feb 28 04:57:48 crc kubenswrapper[5014]: I0228 04:57:48.459561 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj" podStartSLOduration=1.793608914 podStartE2EDuration="10.459539912s" podCreationTimestamp="2026-02-28 04:57:38 +0000 UTC" firstStartedPulling="2026-02-28 04:57:39.139868897 +0000 UTC m=+1447.809994817" lastFinishedPulling="2026-02-28 04:57:47.805799865 +0000 UTC m=+1456.475925815" observedRunningTime="2026-02-28 04:57:48.452386628 +0000 UTC m=+1457.122512538" watchObservedRunningTime="2026-02-28 04:57:48.459539912 +0000 UTC m=+1457.129665842" Feb 28 04:57:53 crc kubenswrapper[5014]: I0228 04:57:53.351087 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 28 04:57:53 crc kubenswrapper[5014]: I0228 04:57:53.525716 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 28 04:57:54 crc kubenswrapper[5014]: I0228 04:57:54.091151 5014 scope.go:117] "RemoveContainer" containerID="2fc04a907aaf3205f61dd158bb0ad1daf10dad80f5bde4a623f3849c1ab06674" Feb 28 04:57:59 crc kubenswrapper[5014]: I0228 04:57:59.534609 5014 generic.go:334] "Generic (PLEG): container finished" podID="01598708-a115-4ecd-a957-e78d6dbedfcb" containerID="18d18facccaf6ca76921daaec6723ce8f2455a2ec337f37de9d843d79105dd62" exitCode=0 Feb 28 04:57:59 crc kubenswrapper[5014]: I0228 04:57:59.534742 5014 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj" event={"ID":"01598708-a115-4ecd-a957-e78d6dbedfcb","Type":"ContainerDied","Data":"18d18facccaf6ca76921daaec6723ce8f2455a2ec337f37de9d843d79105dd62"} Feb 28 04:58:00 crc kubenswrapper[5014]: I0228 04:58:00.184302 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537578-fvxfg"] Feb 28 04:58:00 crc kubenswrapper[5014]: I0228 04:58:00.185966 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537578-fvxfg" Feb 28 04:58:00 crc kubenswrapper[5014]: I0228 04:58:00.188130 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 04:58:00 crc kubenswrapper[5014]: I0228 04:58:00.189026 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 04:58:00 crc kubenswrapper[5014]: I0228 04:58:00.190226 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537578-fvxfg"] Feb 28 04:58:00 crc kubenswrapper[5014]: I0228 04:58:00.190513 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-r6pdk" Feb 28 04:58:00 crc kubenswrapper[5014]: I0228 04:58:00.216579 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9rwp\" (UniqueName: \"kubernetes.io/projected/0ab3025f-a356-4183-9663-8a3c8290c265-kube-api-access-r9rwp\") pod \"auto-csr-approver-29537578-fvxfg\" (UID: \"0ab3025f-a356-4183-9663-8a3c8290c265\") " pod="openshift-infra/auto-csr-approver-29537578-fvxfg" Feb 28 04:58:00 crc kubenswrapper[5014]: I0228 04:58:00.317964 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9rwp\" (UniqueName: 
\"kubernetes.io/projected/0ab3025f-a356-4183-9663-8a3c8290c265-kube-api-access-r9rwp\") pod \"auto-csr-approver-29537578-fvxfg\" (UID: \"0ab3025f-a356-4183-9663-8a3c8290c265\") " pod="openshift-infra/auto-csr-approver-29537578-fvxfg" Feb 28 04:58:00 crc kubenswrapper[5014]: I0228 04:58:00.340763 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9rwp\" (UniqueName: \"kubernetes.io/projected/0ab3025f-a356-4183-9663-8a3c8290c265-kube-api-access-r9rwp\") pod \"auto-csr-approver-29537578-fvxfg\" (UID: \"0ab3025f-a356-4183-9663-8a3c8290c265\") " pod="openshift-infra/auto-csr-approver-29537578-fvxfg" Feb 28 04:58:00 crc kubenswrapper[5014]: I0228 04:58:00.508840 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537578-fvxfg" Feb 28 04:58:01 crc kubenswrapper[5014]: I0228 04:58:01.049931 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537578-fvxfg"] Feb 28 04:58:01 crc kubenswrapper[5014]: W0228 04:58:01.053099 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ab3025f_a356_4183_9663_8a3c8290c265.slice/crio-3daec54a2daa7afe40e32e047f34fb04a94e64da8ea6ea22128539e13b2d6fe4 WatchSource:0}: Error finding container 3daec54a2daa7afe40e32e047f34fb04a94e64da8ea6ea22128539e13b2d6fe4: Status 404 returned error can't find the container with id 3daec54a2daa7afe40e32e047f34fb04a94e64da8ea6ea22128539e13b2d6fe4 Feb 28 04:58:01 crc kubenswrapper[5014]: I0228 04:58:01.059305 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj" Feb 28 04:58:01 crc kubenswrapper[5014]: I0228 04:58:01.132417 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01598708-a115-4ecd-a957-e78d6dbedfcb-ssh-key-openstack-edpm-ipam\") pod \"01598708-a115-4ecd-a957-e78d6dbedfcb\" (UID: \"01598708-a115-4ecd-a957-e78d6dbedfcb\") " Feb 28 04:58:01 crc kubenswrapper[5014]: I0228 04:58:01.132472 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01598708-a115-4ecd-a957-e78d6dbedfcb-inventory\") pod \"01598708-a115-4ecd-a957-e78d6dbedfcb\" (UID: \"01598708-a115-4ecd-a957-e78d6dbedfcb\") " Feb 28 04:58:01 crc kubenswrapper[5014]: I0228 04:58:01.132543 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8l44\" (UniqueName: \"kubernetes.io/projected/01598708-a115-4ecd-a957-e78d6dbedfcb-kube-api-access-b8l44\") pod \"01598708-a115-4ecd-a957-e78d6dbedfcb\" (UID: \"01598708-a115-4ecd-a957-e78d6dbedfcb\") " Feb 28 04:58:01 crc kubenswrapper[5014]: I0228 04:58:01.132762 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01598708-a115-4ecd-a957-e78d6dbedfcb-repo-setup-combined-ca-bundle\") pod \"01598708-a115-4ecd-a957-e78d6dbedfcb\" (UID: \"01598708-a115-4ecd-a957-e78d6dbedfcb\") " Feb 28 04:58:01 crc kubenswrapper[5014]: I0228 04:58:01.138638 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01598708-a115-4ecd-a957-e78d6dbedfcb-kube-api-access-b8l44" (OuterVolumeSpecName: "kube-api-access-b8l44") pod "01598708-a115-4ecd-a957-e78d6dbedfcb" (UID: "01598708-a115-4ecd-a957-e78d6dbedfcb"). InnerVolumeSpecName "kube-api-access-b8l44". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:58:01 crc kubenswrapper[5014]: I0228 04:58:01.139449 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01598708-a115-4ecd-a957-e78d6dbedfcb-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "01598708-a115-4ecd-a957-e78d6dbedfcb" (UID: "01598708-a115-4ecd-a957-e78d6dbedfcb"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:58:01 crc kubenswrapper[5014]: I0228 04:58:01.159064 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01598708-a115-4ecd-a957-e78d6dbedfcb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "01598708-a115-4ecd-a957-e78d6dbedfcb" (UID: "01598708-a115-4ecd-a957-e78d6dbedfcb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:58:01 crc kubenswrapper[5014]: I0228 04:58:01.169026 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01598708-a115-4ecd-a957-e78d6dbedfcb-inventory" (OuterVolumeSpecName: "inventory") pod "01598708-a115-4ecd-a957-e78d6dbedfcb" (UID: "01598708-a115-4ecd-a957-e78d6dbedfcb"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:58:01 crc kubenswrapper[5014]: I0228 04:58:01.236093 5014 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/01598708-a115-4ecd-a957-e78d6dbedfcb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 04:58:01 crc kubenswrapper[5014]: I0228 04:58:01.236135 5014 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/01598708-a115-4ecd-a957-e78d6dbedfcb-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 04:58:01 crc kubenswrapper[5014]: I0228 04:58:01.236148 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8l44\" (UniqueName: \"kubernetes.io/projected/01598708-a115-4ecd-a957-e78d6dbedfcb-kube-api-access-b8l44\") on node \"crc\" DevicePath \"\"" Feb 28 04:58:01 crc kubenswrapper[5014]: I0228 04:58:01.236160 5014 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/01598708-a115-4ecd-a957-e78d6dbedfcb-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 04:58:01 crc kubenswrapper[5014]: I0228 04:58:01.563762 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj" Feb 28 04:58:01 crc kubenswrapper[5014]: I0228 04:58:01.564431 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj" event={"ID":"01598708-a115-4ecd-a957-e78d6dbedfcb","Type":"ContainerDied","Data":"a0aa47a8d876bbfba26919f73b95e688d5d2abe0d55c5c09fec0fa083e312449"} Feb 28 04:58:01 crc kubenswrapper[5014]: I0228 04:58:01.564766 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0aa47a8d876bbfba26919f73b95e688d5d2abe0d55c5c09fec0fa083e312449" Feb 28 04:58:01 crc kubenswrapper[5014]: I0228 04:58:01.566096 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537578-fvxfg" event={"ID":"0ab3025f-a356-4183-9663-8a3c8290c265","Type":"ContainerStarted","Data":"3daec54a2daa7afe40e32e047f34fb04a94e64da8ea6ea22128539e13b2d6fe4"} Feb 28 04:58:01 crc kubenswrapper[5014]: I0228 04:58:01.664547 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-86b55"] Feb 28 04:58:01 crc kubenswrapper[5014]: E0228 04:58:01.665274 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01598708-a115-4ecd-a957-e78d6dbedfcb" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 28 04:58:01 crc kubenswrapper[5014]: I0228 04:58:01.665356 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="01598708-a115-4ecd-a957-e78d6dbedfcb" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 28 04:58:01 crc kubenswrapper[5014]: I0228 04:58:01.665651 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="01598708-a115-4ecd-a957-e78d6dbedfcb" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 28 04:58:01 crc kubenswrapper[5014]: I0228 04:58:01.666405 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-86b55" Feb 28 04:58:01 crc kubenswrapper[5014]: I0228 04:58:01.668731 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 04:58:01 crc kubenswrapper[5014]: I0228 04:58:01.668957 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 04:58:01 crc kubenswrapper[5014]: I0228 04:58:01.668990 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6dz6b" Feb 28 04:58:01 crc kubenswrapper[5014]: I0228 04:58:01.669068 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 04:58:01 crc kubenswrapper[5014]: I0228 04:58:01.681387 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-86b55"] Feb 28 04:58:01 crc kubenswrapper[5014]: I0228 04:58:01.746262 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64b99a72-222b-4ead-b368-fe335c674da5-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-86b55\" (UID: \"64b99a72-222b-4ead-b368-fe335c674da5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-86b55" Feb 28 04:58:01 crc kubenswrapper[5014]: I0228 04:58:01.746439 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64b99a72-222b-4ead-b368-fe335c674da5-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-86b55\" (UID: \"64b99a72-222b-4ead-b368-fe335c674da5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-86b55" Feb 28 04:58:01 crc kubenswrapper[5014]: I0228 04:58:01.746717 5014 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsfmq\" (UniqueName: \"kubernetes.io/projected/64b99a72-222b-4ead-b368-fe335c674da5-kube-api-access-zsfmq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-86b55\" (UID: \"64b99a72-222b-4ead-b368-fe335c674da5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-86b55" Feb 28 04:58:01 crc kubenswrapper[5014]: I0228 04:58:01.848507 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64b99a72-222b-4ead-b368-fe335c674da5-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-86b55\" (UID: \"64b99a72-222b-4ead-b368-fe335c674da5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-86b55" Feb 28 04:58:01 crc kubenswrapper[5014]: I0228 04:58:01.848595 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64b99a72-222b-4ead-b368-fe335c674da5-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-86b55\" (UID: \"64b99a72-222b-4ead-b368-fe335c674da5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-86b55" Feb 28 04:58:01 crc kubenswrapper[5014]: I0228 04:58:01.848853 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsfmq\" (UniqueName: \"kubernetes.io/projected/64b99a72-222b-4ead-b368-fe335c674da5-kube-api-access-zsfmq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-86b55\" (UID: \"64b99a72-222b-4ead-b368-fe335c674da5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-86b55" Feb 28 04:58:01 crc kubenswrapper[5014]: I0228 04:58:01.854291 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64b99a72-222b-4ead-b368-fe335c674da5-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-86b55\" (UID: 
\"64b99a72-222b-4ead-b368-fe335c674da5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-86b55" Feb 28 04:58:01 crc kubenswrapper[5014]: I0228 04:58:01.854532 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64b99a72-222b-4ead-b368-fe335c674da5-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-86b55\" (UID: \"64b99a72-222b-4ead-b368-fe335c674da5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-86b55" Feb 28 04:58:01 crc kubenswrapper[5014]: I0228 04:58:01.883077 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsfmq\" (UniqueName: \"kubernetes.io/projected/64b99a72-222b-4ead-b368-fe335c674da5-kube-api-access-zsfmq\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-86b55\" (UID: \"64b99a72-222b-4ead-b368-fe335c674da5\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-86b55" Feb 28 04:58:01 crc kubenswrapper[5014]: I0228 04:58:01.987144 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-86b55" Feb 28 04:58:02 crc kubenswrapper[5014]: I0228 04:58:02.548542 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-86b55"] Feb 28 04:58:02 crc kubenswrapper[5014]: I0228 04:58:02.576573 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-86b55" event={"ID":"64b99a72-222b-4ead-b368-fe335c674da5","Type":"ContainerStarted","Data":"a660b66af3d6228090c14f13c6f448d18f0ecf17e69858a4e86316c8d7443976"} Feb 28 04:58:02 crc kubenswrapper[5014]: I0228 04:58:02.578385 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537578-fvxfg" event={"ID":"0ab3025f-a356-4183-9663-8a3c8290c265","Type":"ContainerStarted","Data":"8016fc329a3bc5f93c6c5cbd601e686f8dc401cc77617429b7e4fd657f12ffc9"} Feb 28 04:58:02 crc kubenswrapper[5014]: I0228 04:58:02.600504 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537578-fvxfg" podStartSLOduration=1.5248687950000002 podStartE2EDuration="2.600480121s" podCreationTimestamp="2026-02-28 04:58:00 +0000 UTC" firstStartedPulling="2026-02-28 04:58:01.056029155 +0000 UTC m=+1469.726155065" lastFinishedPulling="2026-02-28 04:58:02.131640481 +0000 UTC m=+1470.801766391" observedRunningTime="2026-02-28 04:58:02.590024958 +0000 UTC m=+1471.260150868" watchObservedRunningTime="2026-02-28 04:58:02.600480121 +0000 UTC m=+1471.270606031" Feb 28 04:58:03 crc kubenswrapper[5014]: I0228 04:58:03.593906 5014 generic.go:334] "Generic (PLEG): container finished" podID="0ab3025f-a356-4183-9663-8a3c8290c265" containerID="8016fc329a3bc5f93c6c5cbd601e686f8dc401cc77617429b7e4fd657f12ffc9" exitCode=0 Feb 28 04:58:03 crc kubenswrapper[5014]: I0228 04:58:03.594000 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29537578-fvxfg" event={"ID":"0ab3025f-a356-4183-9663-8a3c8290c265","Type":"ContainerDied","Data":"8016fc329a3bc5f93c6c5cbd601e686f8dc401cc77617429b7e4fd657f12ffc9"} Feb 28 04:58:03 crc kubenswrapper[5014]: I0228 04:58:03.597377 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-86b55" event={"ID":"64b99a72-222b-4ead-b368-fe335c674da5","Type":"ContainerStarted","Data":"da9bed2d2073e7d66f7db1e4e0d742bcf89383617756bb0db6da85515e0f21e5"} Feb 28 04:58:03 crc kubenswrapper[5014]: I0228 04:58:03.653969 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-86b55" podStartSLOduration=2.228418981 podStartE2EDuration="2.653942649s" podCreationTimestamp="2026-02-28 04:58:01 +0000 UTC" firstStartedPulling="2026-02-28 04:58:02.561073866 +0000 UTC m=+1471.231199776" lastFinishedPulling="2026-02-28 04:58:02.986597524 +0000 UTC m=+1471.656723444" observedRunningTime="2026-02-28 04:58:03.637311839 +0000 UTC m=+1472.307437779" watchObservedRunningTime="2026-02-28 04:58:03.653942649 +0000 UTC m=+1472.324068589" Feb 28 04:58:05 crc kubenswrapper[5014]: I0228 04:58:05.046758 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537578-fvxfg" Feb 28 04:58:05 crc kubenswrapper[5014]: I0228 04:58:05.109254 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9rwp\" (UniqueName: \"kubernetes.io/projected/0ab3025f-a356-4183-9663-8a3c8290c265-kube-api-access-r9rwp\") pod \"0ab3025f-a356-4183-9663-8a3c8290c265\" (UID: \"0ab3025f-a356-4183-9663-8a3c8290c265\") " Feb 28 04:58:05 crc kubenswrapper[5014]: I0228 04:58:05.123905 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ab3025f-a356-4183-9663-8a3c8290c265-kube-api-access-r9rwp" (OuterVolumeSpecName: "kube-api-access-r9rwp") pod "0ab3025f-a356-4183-9663-8a3c8290c265" (UID: "0ab3025f-a356-4183-9663-8a3c8290c265"). InnerVolumeSpecName "kube-api-access-r9rwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:58:05 crc kubenswrapper[5014]: I0228 04:58:05.211752 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9rwp\" (UniqueName: \"kubernetes.io/projected/0ab3025f-a356-4183-9663-8a3c8290c265-kube-api-access-r9rwp\") on node \"crc\" DevicePath \"\"" Feb 28 04:58:05 crc kubenswrapper[5014]: I0228 04:58:05.283234 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537572-krzx2"] Feb 28 04:58:05 crc kubenswrapper[5014]: I0228 04:58:05.299921 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537572-krzx2"] Feb 28 04:58:05 crc kubenswrapper[5014]: I0228 04:58:05.623482 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537578-fvxfg" event={"ID":"0ab3025f-a356-4183-9663-8a3c8290c265","Type":"ContainerDied","Data":"3daec54a2daa7afe40e32e047f34fb04a94e64da8ea6ea22128539e13b2d6fe4"} Feb 28 04:58:05 crc kubenswrapper[5014]: I0228 04:58:05.623518 5014 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="3daec54a2daa7afe40e32e047f34fb04a94e64da8ea6ea22128539e13b2d6fe4" Feb 28 04:58:05 crc kubenswrapper[5014]: I0228 04:58:05.623538 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537578-fvxfg" Feb 28 04:58:05 crc kubenswrapper[5014]: E0228 04:58:05.916259 5014 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ab3025f_a356_4183_9663_8a3c8290c265.slice\": RecentStats: unable to find data in memory cache]" Feb 28 04:58:06 crc kubenswrapper[5014]: I0228 04:58:06.186234 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46285c3a-9d55-4bc5-8b40-8413ca3e8a4e" path="/var/lib/kubelet/pods/46285c3a-9d55-4bc5-8b40-8413ca3e8a4e/volumes" Feb 28 04:58:06 crc kubenswrapper[5014]: I0228 04:58:06.635113 5014 generic.go:334] "Generic (PLEG): container finished" podID="64b99a72-222b-4ead-b368-fe335c674da5" containerID="da9bed2d2073e7d66f7db1e4e0d742bcf89383617756bb0db6da85515e0f21e5" exitCode=0 Feb 28 04:58:06 crc kubenswrapper[5014]: I0228 04:58:06.635168 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-86b55" event={"ID":"64b99a72-222b-4ead-b368-fe335c674da5","Type":"ContainerDied","Data":"da9bed2d2073e7d66f7db1e4e0d742bcf89383617756bb0db6da85515e0f21e5"} Feb 28 04:58:08 crc kubenswrapper[5014]: I0228 04:58:08.078137 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-86b55" Feb 28 04:58:08 crc kubenswrapper[5014]: I0228 04:58:08.099889 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64b99a72-222b-4ead-b368-fe335c674da5-ssh-key-openstack-edpm-ipam\") pod \"64b99a72-222b-4ead-b368-fe335c674da5\" (UID: \"64b99a72-222b-4ead-b368-fe335c674da5\") " Feb 28 04:58:08 crc kubenswrapper[5014]: I0228 04:58:08.100053 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsfmq\" (UniqueName: \"kubernetes.io/projected/64b99a72-222b-4ead-b368-fe335c674da5-kube-api-access-zsfmq\") pod \"64b99a72-222b-4ead-b368-fe335c674da5\" (UID: \"64b99a72-222b-4ead-b368-fe335c674da5\") " Feb 28 04:58:08 crc kubenswrapper[5014]: I0228 04:58:08.100195 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64b99a72-222b-4ead-b368-fe335c674da5-inventory\") pod \"64b99a72-222b-4ead-b368-fe335c674da5\" (UID: \"64b99a72-222b-4ead-b368-fe335c674da5\") " Feb 28 04:58:08 crc kubenswrapper[5014]: I0228 04:58:08.110223 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64b99a72-222b-4ead-b368-fe335c674da5-kube-api-access-zsfmq" (OuterVolumeSpecName: "kube-api-access-zsfmq") pod "64b99a72-222b-4ead-b368-fe335c674da5" (UID: "64b99a72-222b-4ead-b368-fe335c674da5"). InnerVolumeSpecName "kube-api-access-zsfmq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 04:58:08 crc kubenswrapper[5014]: I0228 04:58:08.135178 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64b99a72-222b-4ead-b368-fe335c674da5-inventory" (OuterVolumeSpecName: "inventory") pod "64b99a72-222b-4ead-b368-fe335c674da5" (UID: "64b99a72-222b-4ead-b368-fe335c674da5"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:58:08 crc kubenswrapper[5014]: I0228 04:58:08.145641 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64b99a72-222b-4ead-b368-fe335c674da5-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "64b99a72-222b-4ead-b368-fe335c674da5" (UID: "64b99a72-222b-4ead-b368-fe335c674da5"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 04:58:08 crc kubenswrapper[5014]: I0228 04:58:08.202646 5014 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/64b99a72-222b-4ead-b368-fe335c674da5-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 04:58:08 crc kubenswrapper[5014]: I0228 04:58:08.202695 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsfmq\" (UniqueName: \"kubernetes.io/projected/64b99a72-222b-4ead-b368-fe335c674da5-kube-api-access-zsfmq\") on node \"crc\" DevicePath \"\"" Feb 28 04:58:08 crc kubenswrapper[5014]: I0228 04:58:08.202715 5014 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64b99a72-222b-4ead-b368-fe335c674da5-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 04:58:08 crc kubenswrapper[5014]: I0228 04:58:08.730255 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-86b55" event={"ID":"64b99a72-222b-4ead-b368-fe335c674da5","Type":"ContainerDied","Data":"a660b66af3d6228090c14f13c6f448d18f0ecf17e69858a4e86316c8d7443976"} Feb 28 04:58:08 crc kubenswrapper[5014]: I0228 04:58:08.730333 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a660b66af3d6228090c14f13c6f448d18f0ecf17e69858a4e86316c8d7443976" Feb 28 04:58:08 crc kubenswrapper[5014]: I0228 
04:58:08.730363 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-86b55" Feb 28 04:58:08 crc kubenswrapper[5014]: I0228 04:58:08.796460 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv"] Feb 28 04:58:08 crc kubenswrapper[5014]: E0228 04:58:08.797330 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ab3025f-a356-4183-9663-8a3c8290c265" containerName="oc" Feb 28 04:58:08 crc kubenswrapper[5014]: I0228 04:58:08.797374 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ab3025f-a356-4183-9663-8a3c8290c265" containerName="oc" Feb 28 04:58:08 crc kubenswrapper[5014]: E0228 04:58:08.797400 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64b99a72-222b-4ead-b368-fe335c674da5" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 28 04:58:08 crc kubenswrapper[5014]: I0228 04:58:08.797420 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="64b99a72-222b-4ead-b368-fe335c674da5" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 28 04:58:08 crc kubenswrapper[5014]: I0228 04:58:08.797938 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="64b99a72-222b-4ead-b368-fe335c674da5" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 28 04:58:08 crc kubenswrapper[5014]: I0228 04:58:08.798008 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ab3025f-a356-4183-9663-8a3c8290c265" containerName="oc" Feb 28 04:58:08 crc kubenswrapper[5014]: I0228 04:58:08.799309 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv" Feb 28 04:58:08 crc kubenswrapper[5014]: I0228 04:58:08.802061 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 04:58:08 crc kubenswrapper[5014]: I0228 04:58:08.802219 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 04:58:08 crc kubenswrapper[5014]: I0228 04:58:08.802318 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6dz6b" Feb 28 04:58:08 crc kubenswrapper[5014]: I0228 04:58:08.802830 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 04:58:08 crc kubenswrapper[5014]: I0228 04:58:08.809106 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv"] Feb 28 04:58:08 crc kubenswrapper[5014]: I0228 04:58:08.922330 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71fc0e19-253e-4cae-b6ee-7efc24398ffa-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv\" (UID: \"71fc0e19-253e-4cae-b6ee-7efc24398ffa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv" Feb 28 04:58:08 crc kubenswrapper[5014]: I0228 04:58:08.922505 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pfx4\" (UniqueName: \"kubernetes.io/projected/71fc0e19-253e-4cae-b6ee-7efc24398ffa-kube-api-access-9pfx4\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv\" (UID: \"71fc0e19-253e-4cae-b6ee-7efc24398ffa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv" Feb 28 04:58:08 crc kubenswrapper[5014]: I0228 
04:58:08.922551 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71fc0e19-253e-4cae-b6ee-7efc24398ffa-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv\" (UID: \"71fc0e19-253e-4cae-b6ee-7efc24398ffa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv" Feb 28 04:58:08 crc kubenswrapper[5014]: I0228 04:58:08.922934 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71fc0e19-253e-4cae-b6ee-7efc24398ffa-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv\" (UID: \"71fc0e19-253e-4cae-b6ee-7efc24398ffa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv" Feb 28 04:58:09 crc kubenswrapper[5014]: I0228 04:58:09.025334 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71fc0e19-253e-4cae-b6ee-7efc24398ffa-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv\" (UID: \"71fc0e19-253e-4cae-b6ee-7efc24398ffa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv" Feb 28 04:58:09 crc kubenswrapper[5014]: I0228 04:58:09.025526 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71fc0e19-253e-4cae-b6ee-7efc24398ffa-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv\" (UID: \"71fc0e19-253e-4cae-b6ee-7efc24398ffa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv" Feb 28 04:58:09 crc kubenswrapper[5014]: I0228 04:58:09.025664 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pfx4\" (UniqueName: 
\"kubernetes.io/projected/71fc0e19-253e-4cae-b6ee-7efc24398ffa-kube-api-access-9pfx4\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv\" (UID: \"71fc0e19-253e-4cae-b6ee-7efc24398ffa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv" Feb 28 04:58:09 crc kubenswrapper[5014]: I0228 04:58:09.025701 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71fc0e19-253e-4cae-b6ee-7efc24398ffa-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv\" (UID: \"71fc0e19-253e-4cae-b6ee-7efc24398ffa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv" Feb 28 04:58:09 crc kubenswrapper[5014]: I0228 04:58:09.033453 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71fc0e19-253e-4cae-b6ee-7efc24398ffa-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv\" (UID: \"71fc0e19-253e-4cae-b6ee-7efc24398ffa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv" Feb 28 04:58:09 crc kubenswrapper[5014]: I0228 04:58:09.048659 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71fc0e19-253e-4cae-b6ee-7efc24398ffa-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv\" (UID: \"71fc0e19-253e-4cae-b6ee-7efc24398ffa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv" Feb 28 04:58:09 crc kubenswrapper[5014]: I0228 04:58:09.050578 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71fc0e19-253e-4cae-b6ee-7efc24398ffa-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv\" (UID: \"71fc0e19-253e-4cae-b6ee-7efc24398ffa\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv" Feb 28 04:58:09 crc kubenswrapper[5014]: I0228 04:58:09.057733 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pfx4\" (UniqueName: \"kubernetes.io/projected/71fc0e19-253e-4cae-b6ee-7efc24398ffa-kube-api-access-9pfx4\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv\" (UID: \"71fc0e19-253e-4cae-b6ee-7efc24398ffa\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv" Feb 28 04:58:09 crc kubenswrapper[5014]: I0228 04:58:09.135706 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv" Feb 28 04:58:09 crc kubenswrapper[5014]: I0228 04:58:09.695340 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv"] Feb 28 04:58:09 crc kubenswrapper[5014]: I0228 04:58:09.739996 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv" event={"ID":"71fc0e19-253e-4cae-b6ee-7efc24398ffa","Type":"ContainerStarted","Data":"54a7affbb8156ac7a89c29062a969b0126c68f1ac5b084b299660c07a8298224"} Feb 28 04:58:10 crc kubenswrapper[5014]: I0228 04:58:10.751759 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv" event={"ID":"71fc0e19-253e-4cae-b6ee-7efc24398ffa","Type":"ContainerStarted","Data":"bc7bf45765239c6b931912259c1d44a292804886d01d8f87a68cf158c121b8c5"} Feb 28 04:58:10 crc kubenswrapper[5014]: I0228 04:58:10.767482 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv" podStartSLOduration=2.361667761 podStartE2EDuration="2.767467316s" podCreationTimestamp="2026-02-28 04:58:08 +0000 UTC" firstStartedPulling="2026-02-28 04:58:09.707701299 +0000 UTC m=+1478.377827249" 
lastFinishedPulling="2026-02-28 04:58:10.113500884 +0000 UTC m=+1478.783626804" observedRunningTime="2026-02-28 04:58:10.766763748 +0000 UTC m=+1479.436889658" watchObservedRunningTime="2026-02-28 04:58:10.767467316 +0000 UTC m=+1479.437593226" Feb 28 04:58:54 crc kubenswrapper[5014]: I0228 04:58:54.202691 5014 scope.go:117] "RemoveContainer" containerID="26a6cb788829d03c940e48557c3c66439f547e7508a02ace91020b1052c56647" Feb 28 04:58:54 crc kubenswrapper[5014]: I0228 04:58:54.239728 5014 scope.go:117] "RemoveContainer" containerID="8a9c6a52151a3072d41f884b74b1f1ba2df8bfe6a0f566841948e5b37af94750" Feb 28 04:58:54 crc kubenswrapper[5014]: I0228 04:58:54.311319 5014 scope.go:117] "RemoveContainer" containerID="5397b2bb549aeb4a32e16958de7d16547652ece61311bcf11a6a1f357ea86a32" Feb 28 04:58:54 crc kubenswrapper[5014]: I0228 04:58:54.372178 5014 scope.go:117] "RemoveContainer" containerID="027f85da454c64f840a013237eb9aba105367f15604330158a90689b04503b70" Feb 28 04:58:54 crc kubenswrapper[5014]: I0228 04:58:54.405480 5014 scope.go:117] "RemoveContainer" containerID="29a94a8a21103b36ec5a9c08e355416cad5772f0c62b047c91ce146979b30c28" Feb 28 04:58:54 crc kubenswrapper[5014]: I0228 04:58:54.445983 5014 scope.go:117] "RemoveContainer" containerID="5bdf8ea7a06cf7abbed98ff2393b40a3dfc8611d3494f2dd07d07b7560fb5a46" Feb 28 04:58:54 crc kubenswrapper[5014]: I0228 04:58:54.469426 5014 scope.go:117] "RemoveContainer" containerID="6aa052f4b5e7c5a3ad8de9ccf2eb6301e3f49de02844097a1f59be13fb678de0" Feb 28 04:59:54 crc kubenswrapper[5014]: I0228 04:59:54.623048 5014 scope.go:117] "RemoveContainer" containerID="1dfc961674ce32798797b2b57b0df42b1f6a3fdec1ff279e9a2b12082e3ccd9d" Feb 28 04:59:54 crc kubenswrapper[5014]: I0228 04:59:54.672823 5014 scope.go:117] "RemoveContainer" containerID="04122b4e4a5ff26b82494562223746343a136fbfca497e59f68bb121aebe9c97" Feb 28 04:59:54 crc kubenswrapper[5014]: I0228 04:59:54.709159 5014 scope.go:117] "RemoveContainer" 
containerID="55a544d313216d8183984f8ef62ce60d0445fdc4c04a104b1b368cea381a6fba" Feb 28 05:00:00 crc kubenswrapper[5014]: I0228 05:00:00.149480 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537580-fxfr8"] Feb 28 05:00:00 crc kubenswrapper[5014]: I0228 05:00:00.151671 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537580-fxfr8" Feb 28 05:00:00 crc kubenswrapper[5014]: I0228 05:00:00.155025 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 05:00:00 crc kubenswrapper[5014]: I0228 05:00:00.155106 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-r6pdk" Feb 28 05:00:00 crc kubenswrapper[5014]: I0228 05:00:00.156000 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 05:00:00 crc kubenswrapper[5014]: I0228 05:00:00.159901 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537580-zsjhz"] Feb 28 05:00:00 crc kubenswrapper[5014]: I0228 05:00:00.161277 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537580-zsjhz" Feb 28 05:00:00 crc kubenswrapper[5014]: I0228 05:00:00.162880 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 28 05:00:00 crc kubenswrapper[5014]: I0228 05:00:00.163623 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 28 05:00:00 crc kubenswrapper[5014]: I0228 05:00:00.182475 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537580-fxfr8"] Feb 28 05:00:00 crc kubenswrapper[5014]: I0228 05:00:00.190841 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zd6pt\" (UniqueName: \"kubernetes.io/projected/88defcda-3a2d-400f-8906-0c8c958c8f31-kube-api-access-zd6pt\") pod \"auto-csr-approver-29537580-fxfr8\" (UID: \"88defcda-3a2d-400f-8906-0c8c958c8f31\") " pod="openshift-infra/auto-csr-approver-29537580-fxfr8" Feb 28 05:00:00 crc kubenswrapper[5014]: I0228 05:00:00.191031 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b41ac68c-5535-42b1-81e3-c802c005f146-secret-volume\") pod \"collect-profiles-29537580-zsjhz\" (UID: \"b41ac68c-5535-42b1-81e3-c802c005f146\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537580-zsjhz" Feb 28 05:00:00 crc kubenswrapper[5014]: I0228 05:00:00.191204 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thrww\" (UniqueName: \"kubernetes.io/projected/b41ac68c-5535-42b1-81e3-c802c005f146-kube-api-access-thrww\") pod \"collect-profiles-29537580-zsjhz\" (UID: \"b41ac68c-5535-42b1-81e3-c802c005f146\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29537580-zsjhz" Feb 28 05:00:00 crc kubenswrapper[5014]: I0228 05:00:00.191328 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b41ac68c-5535-42b1-81e3-c802c005f146-config-volume\") pod \"collect-profiles-29537580-zsjhz\" (UID: \"b41ac68c-5535-42b1-81e3-c802c005f146\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537580-zsjhz" Feb 28 05:00:00 crc kubenswrapper[5014]: I0228 05:00:00.198850 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537580-zsjhz"] Feb 28 05:00:00 crc kubenswrapper[5014]: I0228 05:00:00.293286 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-thrww\" (UniqueName: \"kubernetes.io/projected/b41ac68c-5535-42b1-81e3-c802c005f146-kube-api-access-thrww\") pod \"collect-profiles-29537580-zsjhz\" (UID: \"b41ac68c-5535-42b1-81e3-c802c005f146\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537580-zsjhz" Feb 28 05:00:00 crc kubenswrapper[5014]: I0228 05:00:00.293442 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b41ac68c-5535-42b1-81e3-c802c005f146-config-volume\") pod \"collect-profiles-29537580-zsjhz\" (UID: \"b41ac68c-5535-42b1-81e3-c802c005f146\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537580-zsjhz" Feb 28 05:00:00 crc kubenswrapper[5014]: I0228 05:00:00.293505 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zd6pt\" (UniqueName: \"kubernetes.io/projected/88defcda-3a2d-400f-8906-0c8c958c8f31-kube-api-access-zd6pt\") pod \"auto-csr-approver-29537580-fxfr8\" (UID: \"88defcda-3a2d-400f-8906-0c8c958c8f31\") " pod="openshift-infra/auto-csr-approver-29537580-fxfr8" Feb 28 05:00:00 
crc kubenswrapper[5014]: I0228 05:00:00.293581 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b41ac68c-5535-42b1-81e3-c802c005f146-secret-volume\") pod \"collect-profiles-29537580-zsjhz\" (UID: \"b41ac68c-5535-42b1-81e3-c802c005f146\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537580-zsjhz" Feb 28 05:00:00 crc kubenswrapper[5014]: I0228 05:00:00.294545 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b41ac68c-5535-42b1-81e3-c802c005f146-config-volume\") pod \"collect-profiles-29537580-zsjhz\" (UID: \"b41ac68c-5535-42b1-81e3-c802c005f146\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537580-zsjhz" Feb 28 05:00:00 crc kubenswrapper[5014]: I0228 05:00:00.305876 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b41ac68c-5535-42b1-81e3-c802c005f146-secret-volume\") pod \"collect-profiles-29537580-zsjhz\" (UID: \"b41ac68c-5535-42b1-81e3-c802c005f146\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537580-zsjhz" Feb 28 05:00:00 crc kubenswrapper[5014]: I0228 05:00:00.310958 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-thrww\" (UniqueName: \"kubernetes.io/projected/b41ac68c-5535-42b1-81e3-c802c005f146-kube-api-access-thrww\") pod \"collect-profiles-29537580-zsjhz\" (UID: \"b41ac68c-5535-42b1-81e3-c802c005f146\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537580-zsjhz" Feb 28 05:00:00 crc kubenswrapper[5014]: I0228 05:00:00.316794 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zd6pt\" (UniqueName: \"kubernetes.io/projected/88defcda-3a2d-400f-8906-0c8c958c8f31-kube-api-access-zd6pt\") pod \"auto-csr-approver-29537580-fxfr8\" (UID: 
\"88defcda-3a2d-400f-8906-0c8c958c8f31\") " pod="openshift-infra/auto-csr-approver-29537580-fxfr8" Feb 28 05:00:00 crc kubenswrapper[5014]: I0228 05:00:00.481658 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537580-fxfr8" Feb 28 05:00:00 crc kubenswrapper[5014]: I0228 05:00:00.493295 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537580-zsjhz" Feb 28 05:00:00 crc kubenswrapper[5014]: I0228 05:00:00.992140 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537580-fxfr8"] Feb 28 05:00:00 crc kubenswrapper[5014]: W0228 05:00:00.999446 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88defcda_3a2d_400f_8906_0c8c958c8f31.slice/crio-68ddac82fb3ea3551c61e6ebfefa766eaffb2c2e384490dc873be424d79f5b69 WatchSource:0}: Error finding container 68ddac82fb3ea3551c61e6ebfefa766eaffb2c2e384490dc873be424d79f5b69: Status 404 returned error can't find the container with id 68ddac82fb3ea3551c61e6ebfefa766eaffb2c2e384490dc873be424d79f5b69 Feb 28 05:00:01 crc kubenswrapper[5014]: I0228 05:00:01.002789 5014 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 05:00:01 crc kubenswrapper[5014]: W0228 05:00:01.150780 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb41ac68c_5535_42b1_81e3_c802c005f146.slice/crio-5aa0c5553eba5c58d6de2836d448f618e2cd1d92aaf63b19a7b1f511d9dea603 WatchSource:0}: Error finding container 5aa0c5553eba5c58d6de2836d448f618e2cd1d92aaf63b19a7b1f511d9dea603: Status 404 returned error can't find the container with id 5aa0c5553eba5c58d6de2836d448f618e2cd1d92aaf63b19a7b1f511d9dea603 Feb 28 05:00:01 crc kubenswrapper[5014]: I0228 05:00:01.153473 5014 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537580-zsjhz"] Feb 28 05:00:01 crc kubenswrapper[5014]: I0228 05:00:01.989159 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537580-fxfr8" event={"ID":"88defcda-3a2d-400f-8906-0c8c958c8f31","Type":"ContainerStarted","Data":"68ddac82fb3ea3551c61e6ebfefa766eaffb2c2e384490dc873be424d79f5b69"} Feb 28 05:00:01 crc kubenswrapper[5014]: I0228 05:00:01.992859 5014 generic.go:334] "Generic (PLEG): container finished" podID="b41ac68c-5535-42b1-81e3-c802c005f146" containerID="1478f02cc29f04a1c6095a0fe53e641c12d0c24c81c98c69363e4be9b412517f" exitCode=0 Feb 28 05:00:01 crc kubenswrapper[5014]: I0228 05:00:01.992912 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537580-zsjhz" event={"ID":"b41ac68c-5535-42b1-81e3-c802c005f146","Type":"ContainerDied","Data":"1478f02cc29f04a1c6095a0fe53e641c12d0c24c81c98c69363e4be9b412517f"} Feb 28 05:00:01 crc kubenswrapper[5014]: I0228 05:00:01.992941 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537580-zsjhz" event={"ID":"b41ac68c-5535-42b1-81e3-c802c005f146","Type":"ContainerStarted","Data":"5aa0c5553eba5c58d6de2836d448f618e2cd1d92aaf63b19a7b1f511d9dea603"} Feb 28 05:00:03 crc kubenswrapper[5014]: I0228 05:00:03.310026 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-s27qd"] Feb 28 05:00:03 crc kubenswrapper[5014]: I0228 05:00:03.312752 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s27qd" Feb 28 05:00:03 crc kubenswrapper[5014]: I0228 05:00:03.326953 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s27qd"] Feb 28 05:00:03 crc kubenswrapper[5014]: I0228 05:00:03.377521 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8d4g\" (UniqueName: \"kubernetes.io/projected/4809fe1e-9b8b-4fcb-9db3-2fa911f71be8-kube-api-access-j8d4g\") pod \"redhat-marketplace-s27qd\" (UID: \"4809fe1e-9b8b-4fcb-9db3-2fa911f71be8\") " pod="openshift-marketplace/redhat-marketplace-s27qd" Feb 28 05:00:03 crc kubenswrapper[5014]: I0228 05:00:03.377556 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4809fe1e-9b8b-4fcb-9db3-2fa911f71be8-catalog-content\") pod \"redhat-marketplace-s27qd\" (UID: \"4809fe1e-9b8b-4fcb-9db3-2fa911f71be8\") " pod="openshift-marketplace/redhat-marketplace-s27qd" Feb 28 05:00:03 crc kubenswrapper[5014]: I0228 05:00:03.377662 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4809fe1e-9b8b-4fcb-9db3-2fa911f71be8-utilities\") pod \"redhat-marketplace-s27qd\" (UID: \"4809fe1e-9b8b-4fcb-9db3-2fa911f71be8\") " pod="openshift-marketplace/redhat-marketplace-s27qd" Feb 28 05:00:03 crc kubenswrapper[5014]: I0228 05:00:03.380175 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537580-zsjhz" Feb 28 05:00:03 crc kubenswrapper[5014]: I0228 05:00:03.480303 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thrww\" (UniqueName: \"kubernetes.io/projected/b41ac68c-5535-42b1-81e3-c802c005f146-kube-api-access-thrww\") pod \"b41ac68c-5535-42b1-81e3-c802c005f146\" (UID: \"b41ac68c-5535-42b1-81e3-c802c005f146\") " Feb 28 05:00:03 crc kubenswrapper[5014]: I0228 05:00:03.480416 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b41ac68c-5535-42b1-81e3-c802c005f146-config-volume\") pod \"b41ac68c-5535-42b1-81e3-c802c005f146\" (UID: \"b41ac68c-5535-42b1-81e3-c802c005f146\") " Feb 28 05:00:03 crc kubenswrapper[5014]: I0228 05:00:03.480560 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b41ac68c-5535-42b1-81e3-c802c005f146-secret-volume\") pod \"b41ac68c-5535-42b1-81e3-c802c005f146\" (UID: \"b41ac68c-5535-42b1-81e3-c802c005f146\") " Feb 28 05:00:03 crc kubenswrapper[5014]: I0228 05:00:03.480840 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4809fe1e-9b8b-4fcb-9db3-2fa911f71be8-utilities\") pod \"redhat-marketplace-s27qd\" (UID: \"4809fe1e-9b8b-4fcb-9db3-2fa911f71be8\") " pod="openshift-marketplace/redhat-marketplace-s27qd" Feb 28 05:00:03 crc kubenswrapper[5014]: I0228 05:00:03.480920 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8d4g\" (UniqueName: \"kubernetes.io/projected/4809fe1e-9b8b-4fcb-9db3-2fa911f71be8-kube-api-access-j8d4g\") pod \"redhat-marketplace-s27qd\" (UID: \"4809fe1e-9b8b-4fcb-9db3-2fa911f71be8\") " pod="openshift-marketplace/redhat-marketplace-s27qd" Feb 28 05:00:03 crc kubenswrapper[5014]: 
I0228 05:00:03.480939 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4809fe1e-9b8b-4fcb-9db3-2fa911f71be8-catalog-content\") pod \"redhat-marketplace-s27qd\" (UID: \"4809fe1e-9b8b-4fcb-9db3-2fa911f71be8\") " pod="openshift-marketplace/redhat-marketplace-s27qd" Feb 28 05:00:03 crc kubenswrapper[5014]: I0228 05:00:03.481254 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b41ac68c-5535-42b1-81e3-c802c005f146-config-volume" (OuterVolumeSpecName: "config-volume") pod "b41ac68c-5535-42b1-81e3-c802c005f146" (UID: "b41ac68c-5535-42b1-81e3-c802c005f146"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 05:00:03 crc kubenswrapper[5014]: I0228 05:00:03.481439 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4809fe1e-9b8b-4fcb-9db3-2fa911f71be8-utilities\") pod \"redhat-marketplace-s27qd\" (UID: \"4809fe1e-9b8b-4fcb-9db3-2fa911f71be8\") " pod="openshift-marketplace/redhat-marketplace-s27qd" Feb 28 05:00:03 crc kubenswrapper[5014]: I0228 05:00:03.481447 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4809fe1e-9b8b-4fcb-9db3-2fa911f71be8-catalog-content\") pod \"redhat-marketplace-s27qd\" (UID: \"4809fe1e-9b8b-4fcb-9db3-2fa911f71be8\") " pod="openshift-marketplace/redhat-marketplace-s27qd" Feb 28 05:00:03 crc kubenswrapper[5014]: I0228 05:00:03.487014 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b41ac68c-5535-42b1-81e3-c802c005f146-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b41ac68c-5535-42b1-81e3-c802c005f146" (UID: "b41ac68c-5535-42b1-81e3-c802c005f146"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:00:03 crc kubenswrapper[5014]: I0228 05:00:03.490092 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b41ac68c-5535-42b1-81e3-c802c005f146-kube-api-access-thrww" (OuterVolumeSpecName: "kube-api-access-thrww") pod "b41ac68c-5535-42b1-81e3-c802c005f146" (UID: "b41ac68c-5535-42b1-81e3-c802c005f146"). InnerVolumeSpecName "kube-api-access-thrww". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:00:03 crc kubenswrapper[5014]: I0228 05:00:03.502663 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8d4g\" (UniqueName: \"kubernetes.io/projected/4809fe1e-9b8b-4fcb-9db3-2fa911f71be8-kube-api-access-j8d4g\") pod \"redhat-marketplace-s27qd\" (UID: \"4809fe1e-9b8b-4fcb-9db3-2fa911f71be8\") " pod="openshift-marketplace/redhat-marketplace-s27qd" Feb 28 05:00:03 crc kubenswrapper[5014]: I0228 05:00:03.583175 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thrww\" (UniqueName: \"kubernetes.io/projected/b41ac68c-5535-42b1-81e3-c802c005f146-kube-api-access-thrww\") on node \"crc\" DevicePath \"\"" Feb 28 05:00:03 crc kubenswrapper[5014]: I0228 05:00:03.583222 5014 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b41ac68c-5535-42b1-81e3-c802c005f146-config-volume\") on node \"crc\" DevicePath \"\"" Feb 28 05:00:03 crc kubenswrapper[5014]: I0228 05:00:03.583234 5014 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b41ac68c-5535-42b1-81e3-c802c005f146-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 28 05:00:03 crc kubenswrapper[5014]: I0228 05:00:03.696286 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s27qd" Feb 28 05:00:04 crc kubenswrapper[5014]: I0228 05:00:04.010263 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537580-zsjhz" event={"ID":"b41ac68c-5535-42b1-81e3-c802c005f146","Type":"ContainerDied","Data":"5aa0c5553eba5c58d6de2836d448f618e2cd1d92aaf63b19a7b1f511d9dea603"} Feb 28 05:00:04 crc kubenswrapper[5014]: I0228 05:00:04.010580 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5aa0c5553eba5c58d6de2836d448f618e2cd1d92aaf63b19a7b1f511d9dea603" Feb 28 05:00:04 crc kubenswrapper[5014]: I0228 05:00:04.011089 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537580-zsjhz" Feb 28 05:00:04 crc kubenswrapper[5014]: I0228 05:00:04.208345 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-s27qd"] Feb 28 05:00:04 crc kubenswrapper[5014]: W0228 05:00:04.209575 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4809fe1e_9b8b_4fcb_9db3_2fa911f71be8.slice/crio-aaf4c38331c757dfbce299273a5a539e4a700665641a05a629000b2ab6153ebf WatchSource:0}: Error finding container aaf4c38331c757dfbce299273a5a539e4a700665641a05a629000b2ab6153ebf: Status 404 returned error can't find the container with id aaf4c38331c757dfbce299273a5a539e4a700665641a05a629000b2ab6153ebf Feb 28 05:00:05 crc kubenswrapper[5014]: I0228 05:00:05.123480 5014 generic.go:334] "Generic (PLEG): container finished" podID="4809fe1e-9b8b-4fcb-9db3-2fa911f71be8" containerID="e1fbb12f30e739b8b18c19c9cbd3fcb761a85c3bf91e8352816faf1f3886bad6" exitCode=0 Feb 28 05:00:05 crc kubenswrapper[5014]: I0228 05:00:05.126449 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s27qd" 
event={"ID":"4809fe1e-9b8b-4fcb-9db3-2fa911f71be8","Type":"ContainerDied","Data":"e1fbb12f30e739b8b18c19c9cbd3fcb761a85c3bf91e8352816faf1f3886bad6"} Feb 28 05:00:05 crc kubenswrapper[5014]: I0228 05:00:05.126521 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s27qd" event={"ID":"4809fe1e-9b8b-4fcb-9db3-2fa911f71be8","Type":"ContainerStarted","Data":"aaf4c38331c757dfbce299273a5a539e4a700665641a05a629000b2ab6153ebf"} Feb 28 05:00:07 crc kubenswrapper[5014]: I0228 05:00:07.153798 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s27qd" event={"ID":"4809fe1e-9b8b-4fcb-9db3-2fa911f71be8","Type":"ContainerStarted","Data":"24d21171c5f48203aa76ad92d72129fa55fdce356f4775fe6c6056068099a262"} Feb 28 05:00:08 crc kubenswrapper[5014]: I0228 05:00:08.166198 5014 generic.go:334] "Generic (PLEG): container finished" podID="4809fe1e-9b8b-4fcb-9db3-2fa911f71be8" containerID="24d21171c5f48203aa76ad92d72129fa55fdce356f4775fe6c6056068099a262" exitCode=0 Feb 28 05:00:08 crc kubenswrapper[5014]: I0228 05:00:08.166329 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s27qd" event={"ID":"4809fe1e-9b8b-4fcb-9db3-2fa911f71be8","Type":"ContainerDied","Data":"24d21171c5f48203aa76ad92d72129fa55fdce356f4775fe6c6056068099a262"} Feb 28 05:00:09 crc kubenswrapper[5014]: I0228 05:00:09.180303 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s27qd" event={"ID":"4809fe1e-9b8b-4fcb-9db3-2fa911f71be8","Type":"ContainerStarted","Data":"f9187620b7ec29eee3485c677a8a85d169584dc2f3c0fe0fe62621f76f72bb9b"} Feb 28 05:00:09 crc kubenswrapper[5014]: I0228 05:00:09.221609 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-s27qd" podStartSLOduration=2.727626167 podStartE2EDuration="6.221586762s" podCreationTimestamp="2026-02-28 05:00:03 +0000 
UTC" firstStartedPulling="2026-02-28 05:00:05.130388282 +0000 UTC m=+1593.800514192" lastFinishedPulling="2026-02-28 05:00:08.624348847 +0000 UTC m=+1597.294474787" observedRunningTime="2026-02-28 05:00:09.214056648 +0000 UTC m=+1597.884182558" watchObservedRunningTime="2026-02-28 05:00:09.221586762 +0000 UTC m=+1597.891712692" Feb 28 05:00:10 crc kubenswrapper[5014]: I0228 05:00:10.190842 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537580-fxfr8" event={"ID":"88defcda-3a2d-400f-8906-0c8c958c8f31","Type":"ContainerStarted","Data":"4336e634cd3bfc21cb020210e89a64d1b7796e122363abe138093d9133be63cb"} Feb 28 05:00:10 crc kubenswrapper[5014]: I0228 05:00:10.215181 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537580-fxfr8" podStartSLOduration=1.3680444409999999 podStartE2EDuration="10.215163395s" podCreationTimestamp="2026-02-28 05:00:00 +0000 UTC" firstStartedPulling="2026-02-28 05:00:01.002507848 +0000 UTC m=+1589.672633758" lastFinishedPulling="2026-02-28 05:00:09.849626802 +0000 UTC m=+1598.519752712" observedRunningTime="2026-02-28 05:00:10.211221807 +0000 UTC m=+1598.881347727" watchObservedRunningTime="2026-02-28 05:00:10.215163395 +0000 UTC m=+1598.885289315" Feb 28 05:00:11 crc kubenswrapper[5014]: I0228 05:00:11.199265 5014 generic.go:334] "Generic (PLEG): container finished" podID="88defcda-3a2d-400f-8906-0c8c958c8f31" containerID="4336e634cd3bfc21cb020210e89a64d1b7796e122363abe138093d9133be63cb" exitCode=0 Feb 28 05:00:11 crc kubenswrapper[5014]: I0228 05:00:11.199310 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537580-fxfr8" event={"ID":"88defcda-3a2d-400f-8906-0c8c958c8f31","Type":"ContainerDied","Data":"4336e634cd3bfc21cb020210e89a64d1b7796e122363abe138093d9133be63cb"} Feb 28 05:00:12 crc kubenswrapper[5014]: I0228 05:00:12.561610 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537580-fxfr8" Feb 28 05:00:12 crc kubenswrapper[5014]: I0228 05:00:12.657159 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zd6pt\" (UniqueName: \"kubernetes.io/projected/88defcda-3a2d-400f-8906-0c8c958c8f31-kube-api-access-zd6pt\") pod \"88defcda-3a2d-400f-8906-0c8c958c8f31\" (UID: \"88defcda-3a2d-400f-8906-0c8c958c8f31\") " Feb 28 05:00:12 crc kubenswrapper[5014]: I0228 05:00:12.663859 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88defcda-3a2d-400f-8906-0c8c958c8f31-kube-api-access-zd6pt" (OuterVolumeSpecName: "kube-api-access-zd6pt") pod "88defcda-3a2d-400f-8906-0c8c958c8f31" (UID: "88defcda-3a2d-400f-8906-0c8c958c8f31"). InnerVolumeSpecName "kube-api-access-zd6pt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:00:12 crc kubenswrapper[5014]: I0228 05:00:12.759787 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zd6pt\" (UniqueName: \"kubernetes.io/projected/88defcda-3a2d-400f-8906-0c8c958c8f31-kube-api-access-zd6pt\") on node \"crc\" DevicePath \"\"" Feb 28 05:00:13 crc kubenswrapper[5014]: I0228 05:00:13.220915 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537580-fxfr8" event={"ID":"88defcda-3a2d-400f-8906-0c8c958c8f31","Type":"ContainerDied","Data":"68ddac82fb3ea3551c61e6ebfefa766eaffb2c2e384490dc873be424d79f5b69"} Feb 28 05:00:13 crc kubenswrapper[5014]: I0228 05:00:13.221297 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68ddac82fb3ea3551c61e6ebfefa766eaffb2c2e384490dc873be424d79f5b69" Feb 28 05:00:13 crc kubenswrapper[5014]: I0228 05:00:13.220981 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537580-fxfr8" Feb 28 05:00:13 crc kubenswrapper[5014]: I0228 05:00:13.293495 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537574-x6j2n"] Feb 28 05:00:13 crc kubenswrapper[5014]: I0228 05:00:13.304602 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537574-x6j2n"] Feb 28 05:00:13 crc kubenswrapper[5014]: I0228 05:00:13.697111 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-s27qd" Feb 28 05:00:13 crc kubenswrapper[5014]: I0228 05:00:13.697184 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-s27qd" Feb 28 05:00:13 crc kubenswrapper[5014]: I0228 05:00:13.767873 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-s27qd" Feb 28 05:00:14 crc kubenswrapper[5014]: I0228 05:00:14.192324 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d33afd2-3959-4f00-8c82-1b46cb382721" path="/var/lib/kubelet/pods/5d33afd2-3959-4f00-8c82-1b46cb382721/volumes" Feb 28 05:00:14 crc kubenswrapper[5014]: I0228 05:00:14.284024 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-s27qd" Feb 28 05:00:14 crc kubenswrapper[5014]: I0228 05:00:14.355109 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s27qd"] Feb 28 05:00:15 crc kubenswrapper[5014]: I0228 05:00:15.706496 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 05:00:15 crc 
kubenswrapper[5014]: I0228 05:00:15.706582 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 05:00:16 crc kubenswrapper[5014]: I0228 05:00:16.245647 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-s27qd" podUID="4809fe1e-9b8b-4fcb-9db3-2fa911f71be8" containerName="registry-server" containerID="cri-o://f9187620b7ec29eee3485c677a8a85d169584dc2f3c0fe0fe62621f76f72bb9b" gracePeriod=2 Feb 28 05:00:16 crc kubenswrapper[5014]: I0228 05:00:16.778077 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s27qd" Feb 28 05:00:16 crc kubenswrapper[5014]: I0228 05:00:16.949380 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4809fe1e-9b8b-4fcb-9db3-2fa911f71be8-catalog-content\") pod \"4809fe1e-9b8b-4fcb-9db3-2fa911f71be8\" (UID: \"4809fe1e-9b8b-4fcb-9db3-2fa911f71be8\") " Feb 28 05:00:16 crc kubenswrapper[5014]: I0228 05:00:16.949514 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4809fe1e-9b8b-4fcb-9db3-2fa911f71be8-utilities\") pod \"4809fe1e-9b8b-4fcb-9db3-2fa911f71be8\" (UID: \"4809fe1e-9b8b-4fcb-9db3-2fa911f71be8\") " Feb 28 05:00:16 crc kubenswrapper[5014]: I0228 05:00:16.949599 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8d4g\" (UniqueName: \"kubernetes.io/projected/4809fe1e-9b8b-4fcb-9db3-2fa911f71be8-kube-api-access-j8d4g\") pod \"4809fe1e-9b8b-4fcb-9db3-2fa911f71be8\" (UID: 
\"4809fe1e-9b8b-4fcb-9db3-2fa911f71be8\") " Feb 28 05:00:16 crc kubenswrapper[5014]: I0228 05:00:16.950495 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4809fe1e-9b8b-4fcb-9db3-2fa911f71be8-utilities" (OuterVolumeSpecName: "utilities") pod "4809fe1e-9b8b-4fcb-9db3-2fa911f71be8" (UID: "4809fe1e-9b8b-4fcb-9db3-2fa911f71be8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:00:16 crc kubenswrapper[5014]: I0228 05:00:16.961943 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4809fe1e-9b8b-4fcb-9db3-2fa911f71be8-kube-api-access-j8d4g" (OuterVolumeSpecName: "kube-api-access-j8d4g") pod "4809fe1e-9b8b-4fcb-9db3-2fa911f71be8" (UID: "4809fe1e-9b8b-4fcb-9db3-2fa911f71be8"). InnerVolumeSpecName "kube-api-access-j8d4g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:00:16 crc kubenswrapper[5014]: I0228 05:00:16.976331 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4809fe1e-9b8b-4fcb-9db3-2fa911f71be8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4809fe1e-9b8b-4fcb-9db3-2fa911f71be8" (UID: "4809fe1e-9b8b-4fcb-9db3-2fa911f71be8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:00:17 crc kubenswrapper[5014]: I0228 05:00:17.052023 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8d4g\" (UniqueName: \"kubernetes.io/projected/4809fe1e-9b8b-4fcb-9db3-2fa911f71be8-kube-api-access-j8d4g\") on node \"crc\" DevicePath \"\"" Feb 28 05:00:17 crc kubenswrapper[5014]: I0228 05:00:17.052089 5014 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4809fe1e-9b8b-4fcb-9db3-2fa911f71be8-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 05:00:17 crc kubenswrapper[5014]: I0228 05:00:17.052098 5014 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4809fe1e-9b8b-4fcb-9db3-2fa911f71be8-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 05:00:17 crc kubenswrapper[5014]: I0228 05:00:17.256449 5014 generic.go:334] "Generic (PLEG): container finished" podID="4809fe1e-9b8b-4fcb-9db3-2fa911f71be8" containerID="f9187620b7ec29eee3485c677a8a85d169584dc2f3c0fe0fe62621f76f72bb9b" exitCode=0 Feb 28 05:00:17 crc kubenswrapper[5014]: I0228 05:00:17.256487 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s27qd" event={"ID":"4809fe1e-9b8b-4fcb-9db3-2fa911f71be8","Type":"ContainerDied","Data":"f9187620b7ec29eee3485c677a8a85d169584dc2f3c0fe0fe62621f76f72bb9b"} Feb 28 05:00:17 crc kubenswrapper[5014]: I0228 05:00:17.256513 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-s27qd" event={"ID":"4809fe1e-9b8b-4fcb-9db3-2fa911f71be8","Type":"ContainerDied","Data":"aaf4c38331c757dfbce299273a5a539e4a700665641a05a629000b2ab6153ebf"} Feb 28 05:00:17 crc kubenswrapper[5014]: I0228 05:00:17.256523 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-s27qd" Feb 28 05:00:17 crc kubenswrapper[5014]: I0228 05:00:17.256531 5014 scope.go:117] "RemoveContainer" containerID="f9187620b7ec29eee3485c677a8a85d169584dc2f3c0fe0fe62621f76f72bb9b" Feb 28 05:00:17 crc kubenswrapper[5014]: I0228 05:00:17.290051 5014 scope.go:117] "RemoveContainer" containerID="24d21171c5f48203aa76ad92d72129fa55fdce356f4775fe6c6056068099a262" Feb 28 05:00:17 crc kubenswrapper[5014]: I0228 05:00:17.294183 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-s27qd"] Feb 28 05:00:17 crc kubenswrapper[5014]: I0228 05:00:17.312440 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-s27qd"] Feb 28 05:00:17 crc kubenswrapper[5014]: I0228 05:00:17.329345 5014 scope.go:117] "RemoveContainer" containerID="e1fbb12f30e739b8b18c19c9cbd3fcb761a85c3bf91e8352816faf1f3886bad6" Feb 28 05:00:17 crc kubenswrapper[5014]: I0228 05:00:17.356398 5014 scope.go:117] "RemoveContainer" containerID="f9187620b7ec29eee3485c677a8a85d169584dc2f3c0fe0fe62621f76f72bb9b" Feb 28 05:00:17 crc kubenswrapper[5014]: E0228 05:00:17.356784 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9187620b7ec29eee3485c677a8a85d169584dc2f3c0fe0fe62621f76f72bb9b\": container with ID starting with f9187620b7ec29eee3485c677a8a85d169584dc2f3c0fe0fe62621f76f72bb9b not found: ID does not exist" containerID="f9187620b7ec29eee3485c677a8a85d169584dc2f3c0fe0fe62621f76f72bb9b" Feb 28 05:00:17 crc kubenswrapper[5014]: I0228 05:00:17.356855 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9187620b7ec29eee3485c677a8a85d169584dc2f3c0fe0fe62621f76f72bb9b"} err="failed to get container status \"f9187620b7ec29eee3485c677a8a85d169584dc2f3c0fe0fe62621f76f72bb9b\": rpc error: code = NotFound desc = could not find container 
\"f9187620b7ec29eee3485c677a8a85d169584dc2f3c0fe0fe62621f76f72bb9b\": container with ID starting with f9187620b7ec29eee3485c677a8a85d169584dc2f3c0fe0fe62621f76f72bb9b not found: ID does not exist" Feb 28 05:00:17 crc kubenswrapper[5014]: I0228 05:00:17.356891 5014 scope.go:117] "RemoveContainer" containerID="24d21171c5f48203aa76ad92d72129fa55fdce356f4775fe6c6056068099a262" Feb 28 05:00:17 crc kubenswrapper[5014]: E0228 05:00:17.357381 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24d21171c5f48203aa76ad92d72129fa55fdce356f4775fe6c6056068099a262\": container with ID starting with 24d21171c5f48203aa76ad92d72129fa55fdce356f4775fe6c6056068099a262 not found: ID does not exist" containerID="24d21171c5f48203aa76ad92d72129fa55fdce356f4775fe6c6056068099a262" Feb 28 05:00:17 crc kubenswrapper[5014]: I0228 05:00:17.357427 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24d21171c5f48203aa76ad92d72129fa55fdce356f4775fe6c6056068099a262"} err="failed to get container status \"24d21171c5f48203aa76ad92d72129fa55fdce356f4775fe6c6056068099a262\": rpc error: code = NotFound desc = could not find container \"24d21171c5f48203aa76ad92d72129fa55fdce356f4775fe6c6056068099a262\": container with ID starting with 24d21171c5f48203aa76ad92d72129fa55fdce356f4775fe6c6056068099a262 not found: ID does not exist" Feb 28 05:00:17 crc kubenswrapper[5014]: I0228 05:00:17.357454 5014 scope.go:117] "RemoveContainer" containerID="e1fbb12f30e739b8b18c19c9cbd3fcb761a85c3bf91e8352816faf1f3886bad6" Feb 28 05:00:17 crc kubenswrapper[5014]: E0228 05:00:17.357727 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1fbb12f30e739b8b18c19c9cbd3fcb761a85c3bf91e8352816faf1f3886bad6\": container with ID starting with e1fbb12f30e739b8b18c19c9cbd3fcb761a85c3bf91e8352816faf1f3886bad6 not found: ID does not exist" 
containerID="e1fbb12f30e739b8b18c19c9cbd3fcb761a85c3bf91e8352816faf1f3886bad6" Feb 28 05:00:17 crc kubenswrapper[5014]: I0228 05:00:17.357774 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1fbb12f30e739b8b18c19c9cbd3fcb761a85c3bf91e8352816faf1f3886bad6"} err="failed to get container status \"e1fbb12f30e739b8b18c19c9cbd3fcb761a85c3bf91e8352816faf1f3886bad6\": rpc error: code = NotFound desc = could not find container \"e1fbb12f30e739b8b18c19c9cbd3fcb761a85c3bf91e8352816faf1f3886bad6\": container with ID starting with e1fbb12f30e739b8b18c19c9cbd3fcb761a85c3bf91e8352816faf1f3886bad6 not found: ID does not exist" Feb 28 05:00:18 crc kubenswrapper[5014]: I0228 05:00:18.183655 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4809fe1e-9b8b-4fcb-9db3-2fa911f71be8" path="/var/lib/kubelet/pods/4809fe1e-9b8b-4fcb-9db3-2fa911f71be8/volumes" Feb 28 05:00:45 crc kubenswrapper[5014]: I0228 05:00:45.707144 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 05:00:45 crc kubenswrapper[5014]: I0228 05:00:45.707772 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 05:00:54 crc kubenswrapper[5014]: I0228 05:00:54.869300 5014 scope.go:117] "RemoveContainer" containerID="2432b29de6293f4a28983a3721b988c96c505c8d540151c1682b7fef2ac9c405" Feb 28 05:01:00 crc kubenswrapper[5014]: I0228 05:01:00.155112 5014 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/keystone-cron-29537581-pvgzn"] Feb 28 05:01:00 crc kubenswrapper[5014]: E0228 05:01:00.156151 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4809fe1e-9b8b-4fcb-9db3-2fa911f71be8" containerName="extract-content" Feb 28 05:01:00 crc kubenswrapper[5014]: I0228 05:01:00.156168 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="4809fe1e-9b8b-4fcb-9db3-2fa911f71be8" containerName="extract-content" Feb 28 05:01:00 crc kubenswrapper[5014]: E0228 05:01:00.156188 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88defcda-3a2d-400f-8906-0c8c958c8f31" containerName="oc" Feb 28 05:01:00 crc kubenswrapper[5014]: I0228 05:01:00.156196 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="88defcda-3a2d-400f-8906-0c8c958c8f31" containerName="oc" Feb 28 05:01:00 crc kubenswrapper[5014]: E0228 05:01:00.156211 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4809fe1e-9b8b-4fcb-9db3-2fa911f71be8" containerName="registry-server" Feb 28 05:01:00 crc kubenswrapper[5014]: I0228 05:01:00.156219 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="4809fe1e-9b8b-4fcb-9db3-2fa911f71be8" containerName="registry-server" Feb 28 05:01:00 crc kubenswrapper[5014]: E0228 05:01:00.156234 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b41ac68c-5535-42b1-81e3-c802c005f146" containerName="collect-profiles" Feb 28 05:01:00 crc kubenswrapper[5014]: I0228 05:01:00.156242 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="b41ac68c-5535-42b1-81e3-c802c005f146" containerName="collect-profiles" Feb 28 05:01:00 crc kubenswrapper[5014]: E0228 05:01:00.156267 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4809fe1e-9b8b-4fcb-9db3-2fa911f71be8" containerName="extract-utilities" Feb 28 05:01:00 crc kubenswrapper[5014]: I0228 05:01:00.156275 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="4809fe1e-9b8b-4fcb-9db3-2fa911f71be8" 
containerName="extract-utilities" Feb 28 05:01:00 crc kubenswrapper[5014]: I0228 05:01:00.156514 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="b41ac68c-5535-42b1-81e3-c802c005f146" containerName="collect-profiles" Feb 28 05:01:00 crc kubenswrapper[5014]: I0228 05:01:00.156539 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="88defcda-3a2d-400f-8906-0c8c958c8f31" containerName="oc" Feb 28 05:01:00 crc kubenswrapper[5014]: I0228 05:01:00.156550 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="4809fe1e-9b8b-4fcb-9db3-2fa911f71be8" containerName="registry-server" Feb 28 05:01:00 crc kubenswrapper[5014]: I0228 05:01:00.157449 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29537581-pvgzn" Feb 28 05:01:00 crc kubenswrapper[5014]: I0228 05:01:00.172019 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29537581-pvgzn"] Feb 28 05:01:00 crc kubenswrapper[5014]: I0228 05:01:00.246477 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhj7z\" (UniqueName: \"kubernetes.io/projected/b2a11b02-95d9-48f6-bb32-afa554e2ec2e-kube-api-access-dhj7z\") pod \"keystone-cron-29537581-pvgzn\" (UID: \"b2a11b02-95d9-48f6-bb32-afa554e2ec2e\") " pod="openstack/keystone-cron-29537581-pvgzn" Feb 28 05:01:00 crc kubenswrapper[5014]: I0228 05:01:00.246539 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2a11b02-95d9-48f6-bb32-afa554e2ec2e-config-data\") pod \"keystone-cron-29537581-pvgzn\" (UID: \"b2a11b02-95d9-48f6-bb32-afa554e2ec2e\") " pod="openstack/keystone-cron-29537581-pvgzn" Feb 28 05:01:00 crc kubenswrapper[5014]: I0228 05:01:00.246833 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b2a11b02-95d9-48f6-bb32-afa554e2ec2e-combined-ca-bundle\") pod \"keystone-cron-29537581-pvgzn\" (UID: \"b2a11b02-95d9-48f6-bb32-afa554e2ec2e\") " pod="openstack/keystone-cron-29537581-pvgzn" Feb 28 05:01:00 crc kubenswrapper[5014]: I0228 05:01:00.247073 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b2a11b02-95d9-48f6-bb32-afa554e2ec2e-fernet-keys\") pod \"keystone-cron-29537581-pvgzn\" (UID: \"b2a11b02-95d9-48f6-bb32-afa554e2ec2e\") " pod="openstack/keystone-cron-29537581-pvgzn" Feb 28 05:01:00 crc kubenswrapper[5014]: I0228 05:01:00.347988 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2a11b02-95d9-48f6-bb32-afa554e2ec2e-combined-ca-bundle\") pod \"keystone-cron-29537581-pvgzn\" (UID: \"b2a11b02-95d9-48f6-bb32-afa554e2ec2e\") " pod="openstack/keystone-cron-29537581-pvgzn" Feb 28 05:01:00 crc kubenswrapper[5014]: I0228 05:01:00.348051 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b2a11b02-95d9-48f6-bb32-afa554e2ec2e-fernet-keys\") pod \"keystone-cron-29537581-pvgzn\" (UID: \"b2a11b02-95d9-48f6-bb32-afa554e2ec2e\") " pod="openstack/keystone-cron-29537581-pvgzn" Feb 28 05:01:00 crc kubenswrapper[5014]: I0228 05:01:00.348151 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhj7z\" (UniqueName: \"kubernetes.io/projected/b2a11b02-95d9-48f6-bb32-afa554e2ec2e-kube-api-access-dhj7z\") pod \"keystone-cron-29537581-pvgzn\" (UID: \"b2a11b02-95d9-48f6-bb32-afa554e2ec2e\") " pod="openstack/keystone-cron-29537581-pvgzn" Feb 28 05:01:00 crc kubenswrapper[5014]: I0228 05:01:00.348167 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b2a11b02-95d9-48f6-bb32-afa554e2ec2e-config-data\") pod \"keystone-cron-29537581-pvgzn\" (UID: \"b2a11b02-95d9-48f6-bb32-afa554e2ec2e\") " pod="openstack/keystone-cron-29537581-pvgzn" Feb 28 05:01:00 crc kubenswrapper[5014]: I0228 05:01:00.354765 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2a11b02-95d9-48f6-bb32-afa554e2ec2e-combined-ca-bundle\") pod \"keystone-cron-29537581-pvgzn\" (UID: \"b2a11b02-95d9-48f6-bb32-afa554e2ec2e\") " pod="openstack/keystone-cron-29537581-pvgzn" Feb 28 05:01:00 crc kubenswrapper[5014]: I0228 05:01:00.359743 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2a11b02-95d9-48f6-bb32-afa554e2ec2e-config-data\") pod \"keystone-cron-29537581-pvgzn\" (UID: \"b2a11b02-95d9-48f6-bb32-afa554e2ec2e\") " pod="openstack/keystone-cron-29537581-pvgzn" Feb 28 05:01:00 crc kubenswrapper[5014]: I0228 05:01:00.370694 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b2a11b02-95d9-48f6-bb32-afa554e2ec2e-fernet-keys\") pod \"keystone-cron-29537581-pvgzn\" (UID: \"b2a11b02-95d9-48f6-bb32-afa554e2ec2e\") " pod="openstack/keystone-cron-29537581-pvgzn" Feb 28 05:01:00 crc kubenswrapper[5014]: I0228 05:01:00.376906 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhj7z\" (UniqueName: \"kubernetes.io/projected/b2a11b02-95d9-48f6-bb32-afa554e2ec2e-kube-api-access-dhj7z\") pod \"keystone-cron-29537581-pvgzn\" (UID: \"b2a11b02-95d9-48f6-bb32-afa554e2ec2e\") " pod="openstack/keystone-cron-29537581-pvgzn" Feb 28 05:01:00 crc kubenswrapper[5014]: I0228 05:01:00.486435 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29537581-pvgzn" Feb 28 05:01:00 crc kubenswrapper[5014]: I0228 05:01:00.972016 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29537581-pvgzn"] Feb 28 05:01:00 crc kubenswrapper[5014]: W0228 05:01:00.977971 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2a11b02_95d9_48f6_bb32_afa554e2ec2e.slice/crio-460fa429636837a23e318ede2ccce66eac7f3989784ae9e7d5b9f05ea928b57d WatchSource:0}: Error finding container 460fa429636837a23e318ede2ccce66eac7f3989784ae9e7d5b9f05ea928b57d: Status 404 returned error can't find the container with id 460fa429636837a23e318ede2ccce66eac7f3989784ae9e7d5b9f05ea928b57d Feb 28 05:01:01 crc kubenswrapper[5014]: I0228 05:01:01.711003 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29537581-pvgzn" event={"ID":"b2a11b02-95d9-48f6-bb32-afa554e2ec2e","Type":"ContainerStarted","Data":"0e08dc4f66bde0a8b2a54593946bcea17981d1e925b3a156f6a23c5af8e52ec9"} Feb 28 05:01:01 crc kubenswrapper[5014]: I0228 05:01:01.711305 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29537581-pvgzn" event={"ID":"b2a11b02-95d9-48f6-bb32-afa554e2ec2e","Type":"ContainerStarted","Data":"460fa429636837a23e318ede2ccce66eac7f3989784ae9e7d5b9f05ea928b57d"} Feb 28 05:01:01 crc kubenswrapper[5014]: I0228 05:01:01.728554 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29537581-pvgzn" podStartSLOduration=1.728539721 podStartE2EDuration="1.728539721s" podCreationTimestamp="2026-02-28 05:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 05:01:01.725366485 +0000 UTC m=+1650.395492385" watchObservedRunningTime="2026-02-28 05:01:01.728539721 +0000 UTC m=+1650.398665631" Feb 28 05:01:03 crc 
kubenswrapper[5014]: I0228 05:01:03.731992 5014 generic.go:334] "Generic (PLEG): container finished" podID="b2a11b02-95d9-48f6-bb32-afa554e2ec2e" containerID="0e08dc4f66bde0a8b2a54593946bcea17981d1e925b3a156f6a23c5af8e52ec9" exitCode=0 Feb 28 05:01:03 crc kubenswrapper[5014]: I0228 05:01:03.732336 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29537581-pvgzn" event={"ID":"b2a11b02-95d9-48f6-bb32-afa554e2ec2e","Type":"ContainerDied","Data":"0e08dc4f66bde0a8b2a54593946bcea17981d1e925b3a156f6a23c5af8e52ec9"} Feb 28 05:01:05 crc kubenswrapper[5014]: I0228 05:01:05.751366 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29537581-pvgzn" event={"ID":"b2a11b02-95d9-48f6-bb32-afa554e2ec2e","Type":"ContainerDied","Data":"460fa429636837a23e318ede2ccce66eac7f3989784ae9e7d5b9f05ea928b57d"} Feb 28 05:01:05 crc kubenswrapper[5014]: I0228 05:01:05.751707 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="460fa429636837a23e318ede2ccce66eac7f3989784ae9e7d5b9f05ea928b57d" Feb 28 05:01:05 crc kubenswrapper[5014]: I0228 05:01:05.808675 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29537581-pvgzn" Feb 28 05:01:05 crc kubenswrapper[5014]: I0228 05:01:05.974223 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhj7z\" (UniqueName: \"kubernetes.io/projected/b2a11b02-95d9-48f6-bb32-afa554e2ec2e-kube-api-access-dhj7z\") pod \"b2a11b02-95d9-48f6-bb32-afa554e2ec2e\" (UID: \"b2a11b02-95d9-48f6-bb32-afa554e2ec2e\") " Feb 28 05:01:05 crc kubenswrapper[5014]: I0228 05:01:05.974506 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b2a11b02-95d9-48f6-bb32-afa554e2ec2e-fernet-keys\") pod \"b2a11b02-95d9-48f6-bb32-afa554e2ec2e\" (UID: \"b2a11b02-95d9-48f6-bb32-afa554e2ec2e\") " Feb 28 05:01:05 crc kubenswrapper[5014]: I0228 05:01:05.974673 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2a11b02-95d9-48f6-bb32-afa554e2ec2e-combined-ca-bundle\") pod \"b2a11b02-95d9-48f6-bb32-afa554e2ec2e\" (UID: \"b2a11b02-95d9-48f6-bb32-afa554e2ec2e\") " Feb 28 05:01:05 crc kubenswrapper[5014]: I0228 05:01:05.974739 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2a11b02-95d9-48f6-bb32-afa554e2ec2e-config-data\") pod \"b2a11b02-95d9-48f6-bb32-afa554e2ec2e\" (UID: \"b2a11b02-95d9-48f6-bb32-afa554e2ec2e\") " Feb 28 05:01:05 crc kubenswrapper[5014]: I0228 05:01:05.984063 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2a11b02-95d9-48f6-bb32-afa554e2ec2e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b2a11b02-95d9-48f6-bb32-afa554e2ec2e" (UID: "b2a11b02-95d9-48f6-bb32-afa554e2ec2e"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:01:05 crc kubenswrapper[5014]: I0228 05:01:05.993543 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2a11b02-95d9-48f6-bb32-afa554e2ec2e-kube-api-access-dhj7z" (OuterVolumeSpecName: "kube-api-access-dhj7z") pod "b2a11b02-95d9-48f6-bb32-afa554e2ec2e" (UID: "b2a11b02-95d9-48f6-bb32-afa554e2ec2e"). InnerVolumeSpecName "kube-api-access-dhj7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:01:06 crc kubenswrapper[5014]: I0228 05:01:06.016256 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2a11b02-95d9-48f6-bb32-afa554e2ec2e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2a11b02-95d9-48f6-bb32-afa554e2ec2e" (UID: "b2a11b02-95d9-48f6-bb32-afa554e2ec2e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:01:06 crc kubenswrapper[5014]: I0228 05:01:06.033186 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2a11b02-95d9-48f6-bb32-afa554e2ec2e-config-data" (OuterVolumeSpecName: "config-data") pod "b2a11b02-95d9-48f6-bb32-afa554e2ec2e" (UID: "b2a11b02-95d9-48f6-bb32-afa554e2ec2e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:01:06 crc kubenswrapper[5014]: I0228 05:01:06.076833 5014 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2a11b02-95d9-48f6-bb32-afa554e2ec2e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 05:01:06 crc kubenswrapper[5014]: I0228 05:01:06.076913 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2a11b02-95d9-48f6-bb32-afa554e2ec2e-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 05:01:06 crc kubenswrapper[5014]: I0228 05:01:06.076927 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhj7z\" (UniqueName: \"kubernetes.io/projected/b2a11b02-95d9-48f6-bb32-afa554e2ec2e-kube-api-access-dhj7z\") on node \"crc\" DevicePath \"\"" Feb 28 05:01:06 crc kubenswrapper[5014]: I0228 05:01:06.076944 5014 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b2a11b02-95d9-48f6-bb32-afa554e2ec2e-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 28 05:01:06 crc kubenswrapper[5014]: I0228 05:01:06.763501 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29537581-pvgzn" Feb 28 05:01:07 crc kubenswrapper[5014]: I0228 05:01:07.772265 5014 generic.go:334] "Generic (PLEG): container finished" podID="71fc0e19-253e-4cae-b6ee-7efc24398ffa" containerID="bc7bf45765239c6b931912259c1d44a292804886d01d8f87a68cf158c121b8c5" exitCode=0 Feb 28 05:01:07 crc kubenswrapper[5014]: I0228 05:01:07.772390 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv" event={"ID":"71fc0e19-253e-4cae-b6ee-7efc24398ffa","Type":"ContainerDied","Data":"bc7bf45765239c6b931912259c1d44a292804886d01d8f87a68cf158c121b8c5"} Feb 28 05:01:09 crc kubenswrapper[5014]: I0228 05:01:09.241300 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv" Feb 28 05:01:09 crc kubenswrapper[5014]: I0228 05:01:09.443140 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71fc0e19-253e-4cae-b6ee-7efc24398ffa-bootstrap-combined-ca-bundle\") pod \"71fc0e19-253e-4cae-b6ee-7efc24398ffa\" (UID: \"71fc0e19-253e-4cae-b6ee-7efc24398ffa\") " Feb 28 05:01:09 crc kubenswrapper[5014]: I0228 05:01:09.443237 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pfx4\" (UniqueName: \"kubernetes.io/projected/71fc0e19-253e-4cae-b6ee-7efc24398ffa-kube-api-access-9pfx4\") pod \"71fc0e19-253e-4cae-b6ee-7efc24398ffa\" (UID: \"71fc0e19-253e-4cae-b6ee-7efc24398ffa\") " Feb 28 05:01:09 crc kubenswrapper[5014]: I0228 05:01:09.443290 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71fc0e19-253e-4cae-b6ee-7efc24398ffa-ssh-key-openstack-edpm-ipam\") pod \"71fc0e19-253e-4cae-b6ee-7efc24398ffa\" (UID: \"71fc0e19-253e-4cae-b6ee-7efc24398ffa\") " 
Feb 28 05:01:09 crc kubenswrapper[5014]: I0228 05:01:09.443370 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71fc0e19-253e-4cae-b6ee-7efc24398ffa-inventory\") pod \"71fc0e19-253e-4cae-b6ee-7efc24398ffa\" (UID: \"71fc0e19-253e-4cae-b6ee-7efc24398ffa\") " Feb 28 05:01:09 crc kubenswrapper[5014]: I0228 05:01:09.451000 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71fc0e19-253e-4cae-b6ee-7efc24398ffa-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "71fc0e19-253e-4cae-b6ee-7efc24398ffa" (UID: "71fc0e19-253e-4cae-b6ee-7efc24398ffa"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:01:09 crc kubenswrapper[5014]: I0228 05:01:09.459525 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71fc0e19-253e-4cae-b6ee-7efc24398ffa-kube-api-access-9pfx4" (OuterVolumeSpecName: "kube-api-access-9pfx4") pod "71fc0e19-253e-4cae-b6ee-7efc24398ffa" (UID: "71fc0e19-253e-4cae-b6ee-7efc24398ffa"). InnerVolumeSpecName "kube-api-access-9pfx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:01:09 crc kubenswrapper[5014]: I0228 05:01:09.481036 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71fc0e19-253e-4cae-b6ee-7efc24398ffa-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "71fc0e19-253e-4cae-b6ee-7efc24398ffa" (UID: "71fc0e19-253e-4cae-b6ee-7efc24398ffa"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:01:09 crc kubenswrapper[5014]: I0228 05:01:09.483910 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71fc0e19-253e-4cae-b6ee-7efc24398ffa-inventory" (OuterVolumeSpecName: "inventory") pod "71fc0e19-253e-4cae-b6ee-7efc24398ffa" (UID: "71fc0e19-253e-4cae-b6ee-7efc24398ffa"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:01:09 crc kubenswrapper[5014]: I0228 05:01:09.545234 5014 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/71fc0e19-253e-4cae-b6ee-7efc24398ffa-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 05:01:09 crc kubenswrapper[5014]: I0228 05:01:09.545827 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pfx4\" (UniqueName: \"kubernetes.io/projected/71fc0e19-253e-4cae-b6ee-7efc24398ffa-kube-api-access-9pfx4\") on node \"crc\" DevicePath \"\"" Feb 28 05:01:09 crc kubenswrapper[5014]: I0228 05:01:09.545843 5014 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/71fc0e19-253e-4cae-b6ee-7efc24398ffa-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 05:01:09 crc kubenswrapper[5014]: I0228 05:01:09.545854 5014 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/71fc0e19-253e-4cae-b6ee-7efc24398ffa-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 05:01:09 crc kubenswrapper[5014]: I0228 05:01:09.799937 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv" event={"ID":"71fc0e19-253e-4cae-b6ee-7efc24398ffa","Type":"ContainerDied","Data":"54a7affbb8156ac7a89c29062a969b0126c68f1ac5b084b299660c07a8298224"} Feb 28 05:01:09 crc kubenswrapper[5014]: I0228 05:01:09.799996 5014 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54a7affbb8156ac7a89c29062a969b0126c68f1ac5b084b299660c07a8298224" Feb 28 05:01:09 crc kubenswrapper[5014]: I0228 05:01:09.800002 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv" Feb 28 05:01:09 crc kubenswrapper[5014]: I0228 05:01:09.900338 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq52b"] Feb 28 05:01:09 crc kubenswrapper[5014]: E0228 05:01:09.903300 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2a11b02-95d9-48f6-bb32-afa554e2ec2e" containerName="keystone-cron" Feb 28 05:01:09 crc kubenswrapper[5014]: I0228 05:01:09.903336 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2a11b02-95d9-48f6-bb32-afa554e2ec2e" containerName="keystone-cron" Feb 28 05:01:09 crc kubenswrapper[5014]: E0228 05:01:09.903372 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71fc0e19-253e-4cae-b6ee-7efc24398ffa" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 28 05:01:09 crc kubenswrapper[5014]: I0228 05:01:09.903387 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="71fc0e19-253e-4cae-b6ee-7efc24398ffa" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 28 05:01:09 crc kubenswrapper[5014]: I0228 05:01:09.903801 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2a11b02-95d9-48f6-bb32-afa554e2ec2e" containerName="keystone-cron" Feb 28 05:01:09 crc kubenswrapper[5014]: I0228 05:01:09.903866 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="71fc0e19-253e-4cae-b6ee-7efc24398ffa" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 28 05:01:09 crc kubenswrapper[5014]: I0228 05:01:09.904753 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq52b" Feb 28 05:01:09 crc kubenswrapper[5014]: I0228 05:01:09.907769 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 05:01:09 crc kubenswrapper[5014]: I0228 05:01:09.908407 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 05:01:09 crc kubenswrapper[5014]: I0228 05:01:09.910106 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6dz6b" Feb 28 05:01:09 crc kubenswrapper[5014]: I0228 05:01:09.913403 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 05:01:09 crc kubenswrapper[5014]: I0228 05:01:09.916309 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq52b"] Feb 28 05:01:10 crc kubenswrapper[5014]: I0228 05:01:10.057090 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqbxq\" (UniqueName: \"kubernetes.io/projected/92c43e33-7947-4ad2-984a-e2618b76f368-kube-api-access-sqbxq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fq52b\" (UID: \"92c43e33-7947-4ad2-984a-e2618b76f368\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq52b" Feb 28 05:01:10 crc kubenswrapper[5014]: I0228 05:01:10.057174 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92c43e33-7947-4ad2-984a-e2618b76f368-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fq52b\" (UID: \"92c43e33-7947-4ad2-984a-e2618b76f368\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq52b" Feb 28 05:01:10 crc kubenswrapper[5014]: I0228 
05:01:10.057592 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92c43e33-7947-4ad2-984a-e2618b76f368-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fq52b\" (UID: \"92c43e33-7947-4ad2-984a-e2618b76f368\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq52b" Feb 28 05:01:10 crc kubenswrapper[5014]: I0228 05:01:10.161427 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92c43e33-7947-4ad2-984a-e2618b76f368-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fq52b\" (UID: \"92c43e33-7947-4ad2-984a-e2618b76f368\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq52b" Feb 28 05:01:10 crc kubenswrapper[5014]: I0228 05:01:10.161701 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqbxq\" (UniqueName: \"kubernetes.io/projected/92c43e33-7947-4ad2-984a-e2618b76f368-kube-api-access-sqbxq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fq52b\" (UID: \"92c43e33-7947-4ad2-984a-e2618b76f368\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq52b" Feb 28 05:01:10 crc kubenswrapper[5014]: I0228 05:01:10.161788 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92c43e33-7947-4ad2-984a-e2618b76f368-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fq52b\" (UID: \"92c43e33-7947-4ad2-984a-e2618b76f368\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq52b" Feb 28 05:01:10 crc kubenswrapper[5014]: I0228 05:01:10.166419 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/92c43e33-7947-4ad2-984a-e2618b76f368-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fq52b\" (UID: \"92c43e33-7947-4ad2-984a-e2618b76f368\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq52b" Feb 28 05:01:10 crc kubenswrapper[5014]: I0228 05:01:10.174463 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92c43e33-7947-4ad2-984a-e2618b76f368-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fq52b\" (UID: \"92c43e33-7947-4ad2-984a-e2618b76f368\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq52b" Feb 28 05:01:10 crc kubenswrapper[5014]: I0228 05:01:10.182696 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqbxq\" (UniqueName: \"kubernetes.io/projected/92c43e33-7947-4ad2-984a-e2618b76f368-kube-api-access-sqbxq\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fq52b\" (UID: \"92c43e33-7947-4ad2-984a-e2618b76f368\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq52b" Feb 28 05:01:10 crc kubenswrapper[5014]: I0228 05:01:10.228720 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq52b" Feb 28 05:01:10 crc kubenswrapper[5014]: I0228 05:01:10.777297 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq52b"] Feb 28 05:01:10 crc kubenswrapper[5014]: I0228 05:01:10.809331 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq52b" event={"ID":"92c43e33-7947-4ad2-984a-e2618b76f368","Type":"ContainerStarted","Data":"570ee847e6b36f0e94bb439081aaf20750eda45c94caaa99750c8d81c53e1711"} Feb 28 05:01:11 crc kubenswrapper[5014]: I0228 05:01:11.821926 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq52b" event={"ID":"92c43e33-7947-4ad2-984a-e2618b76f368","Type":"ContainerStarted","Data":"bbc25012e3c32f3a0a26f0ee9952ec4f8ff10a50e47536dd635656a795922072"} Feb 28 05:01:11 crc kubenswrapper[5014]: I0228 05:01:11.847183 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq52b" podStartSLOduration=2.362119841 podStartE2EDuration="2.847155553s" podCreationTimestamp="2026-02-28 05:01:09 +0000 UTC" firstStartedPulling="2026-02-28 05:01:10.785431043 +0000 UTC m=+1659.455556953" lastFinishedPulling="2026-02-28 05:01:11.270466745 +0000 UTC m=+1659.940592665" observedRunningTime="2026-02-28 05:01:11.838419437 +0000 UTC m=+1660.508545387" watchObservedRunningTime="2026-02-28 05:01:11.847155553 +0000 UTC m=+1660.517281493" Feb 28 05:01:14 crc kubenswrapper[5014]: I0228 05:01:14.367532 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-84sxp"] Feb 28 05:01:14 crc kubenswrapper[5014]: I0228 05:01:14.371067 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-84sxp" Feb 28 05:01:14 crc kubenswrapper[5014]: I0228 05:01:14.381389 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-84sxp"] Feb 28 05:01:14 crc kubenswrapper[5014]: I0228 05:01:14.542448 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f07906b1-fbe5-4a53-a0a6-a61230dc0e55-utilities\") pod \"certified-operators-84sxp\" (UID: \"f07906b1-fbe5-4a53-a0a6-a61230dc0e55\") " pod="openshift-marketplace/certified-operators-84sxp" Feb 28 05:01:14 crc kubenswrapper[5014]: I0228 05:01:14.542524 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxphm\" (UniqueName: \"kubernetes.io/projected/f07906b1-fbe5-4a53-a0a6-a61230dc0e55-kube-api-access-nxphm\") pod \"certified-operators-84sxp\" (UID: \"f07906b1-fbe5-4a53-a0a6-a61230dc0e55\") " pod="openshift-marketplace/certified-operators-84sxp" Feb 28 05:01:14 crc kubenswrapper[5014]: I0228 05:01:14.542589 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f07906b1-fbe5-4a53-a0a6-a61230dc0e55-catalog-content\") pod \"certified-operators-84sxp\" (UID: \"f07906b1-fbe5-4a53-a0a6-a61230dc0e55\") " pod="openshift-marketplace/certified-operators-84sxp" Feb 28 05:01:14 crc kubenswrapper[5014]: I0228 05:01:14.643426 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f07906b1-fbe5-4a53-a0a6-a61230dc0e55-utilities\") pod \"certified-operators-84sxp\" (UID: \"f07906b1-fbe5-4a53-a0a6-a61230dc0e55\") " pod="openshift-marketplace/certified-operators-84sxp" Feb 28 05:01:14 crc kubenswrapper[5014]: I0228 05:01:14.643479 5014 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-nxphm\" (UniqueName: \"kubernetes.io/projected/f07906b1-fbe5-4a53-a0a6-a61230dc0e55-kube-api-access-nxphm\") pod \"certified-operators-84sxp\" (UID: \"f07906b1-fbe5-4a53-a0a6-a61230dc0e55\") " pod="openshift-marketplace/certified-operators-84sxp" Feb 28 05:01:14 crc kubenswrapper[5014]: I0228 05:01:14.643528 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f07906b1-fbe5-4a53-a0a6-a61230dc0e55-catalog-content\") pod \"certified-operators-84sxp\" (UID: \"f07906b1-fbe5-4a53-a0a6-a61230dc0e55\") " pod="openshift-marketplace/certified-operators-84sxp" Feb 28 05:01:14 crc kubenswrapper[5014]: I0228 05:01:14.644132 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f07906b1-fbe5-4a53-a0a6-a61230dc0e55-catalog-content\") pod \"certified-operators-84sxp\" (UID: \"f07906b1-fbe5-4a53-a0a6-a61230dc0e55\") " pod="openshift-marketplace/certified-operators-84sxp" Feb 28 05:01:14 crc kubenswrapper[5014]: I0228 05:01:14.644156 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f07906b1-fbe5-4a53-a0a6-a61230dc0e55-utilities\") pod \"certified-operators-84sxp\" (UID: \"f07906b1-fbe5-4a53-a0a6-a61230dc0e55\") " pod="openshift-marketplace/certified-operators-84sxp" Feb 28 05:01:14 crc kubenswrapper[5014]: I0228 05:01:14.666541 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxphm\" (UniqueName: \"kubernetes.io/projected/f07906b1-fbe5-4a53-a0a6-a61230dc0e55-kube-api-access-nxphm\") pod \"certified-operators-84sxp\" (UID: \"f07906b1-fbe5-4a53-a0a6-a61230dc0e55\") " pod="openshift-marketplace/certified-operators-84sxp" Feb 28 05:01:14 crc kubenswrapper[5014]: I0228 05:01:14.693347 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-84sxp" Feb 28 05:01:15 crc kubenswrapper[5014]: I0228 05:01:15.204699 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-84sxp"] Feb 28 05:01:15 crc kubenswrapper[5014]: I0228 05:01:15.706668 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 05:01:15 crc kubenswrapper[5014]: I0228 05:01:15.707009 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 05:01:15 crc kubenswrapper[5014]: I0228 05:01:15.707055 5014 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cct62" Feb 28 05:01:15 crc kubenswrapper[5014]: I0228 05:01:15.707703 5014 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"831c080a3f614d28f54435ea4566bbc7b3d9ce5cf8da86a40f56d8dfeed0dff0"} pod="openshift-machine-config-operator/machine-config-daemon-cct62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 05:01:15 crc kubenswrapper[5014]: I0228 05:01:15.707761 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" 
containerID="cri-o://831c080a3f614d28f54435ea4566bbc7b3d9ce5cf8da86a40f56d8dfeed0dff0" gracePeriod=600 Feb 28 05:01:15 crc kubenswrapper[5014]: E0228 05:01:15.827994 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:01:15 crc kubenswrapper[5014]: I0228 05:01:15.882417 5014 generic.go:334] "Generic (PLEG): container finished" podID="6aad0009-d904-48f8-8e30-82205907ece1" containerID="831c080a3f614d28f54435ea4566bbc7b3d9ce5cf8da86a40f56d8dfeed0dff0" exitCode=0 Feb 28 05:01:15 crc kubenswrapper[5014]: I0228 05:01:15.882481 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" event={"ID":"6aad0009-d904-48f8-8e30-82205907ece1","Type":"ContainerDied","Data":"831c080a3f614d28f54435ea4566bbc7b3d9ce5cf8da86a40f56d8dfeed0dff0"} Feb 28 05:01:15 crc kubenswrapper[5014]: I0228 05:01:15.882542 5014 scope.go:117] "RemoveContainer" containerID="a1900058ed5d5055efcaa8b7a5a928b3456052935d481ae9dedaea0c3e448c54" Feb 28 05:01:15 crc kubenswrapper[5014]: I0228 05:01:15.883236 5014 scope.go:117] "RemoveContainer" containerID="831c080a3f614d28f54435ea4566bbc7b3d9ce5cf8da86a40f56d8dfeed0dff0" Feb 28 05:01:15 crc kubenswrapper[5014]: E0228 05:01:15.883529 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" 
podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:01:15 crc kubenswrapper[5014]: I0228 05:01:15.887222 5014 generic.go:334] "Generic (PLEG): container finished" podID="f07906b1-fbe5-4a53-a0a6-a61230dc0e55" containerID="6ff7e47e56944c798d06614e02e073896c37b53eb2fb6fdb3de71884fdd7c17b" exitCode=0 Feb 28 05:01:15 crc kubenswrapper[5014]: I0228 05:01:15.887262 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84sxp" event={"ID":"f07906b1-fbe5-4a53-a0a6-a61230dc0e55","Type":"ContainerDied","Data":"6ff7e47e56944c798d06614e02e073896c37b53eb2fb6fdb3de71884fdd7c17b"} Feb 28 05:01:15 crc kubenswrapper[5014]: I0228 05:01:15.887290 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84sxp" event={"ID":"f07906b1-fbe5-4a53-a0a6-a61230dc0e55","Type":"ContainerStarted","Data":"3146283673ed0e717b23dd69178c84505341cc229de352cf6d62145113e47858"} Feb 28 05:01:16 crc kubenswrapper[5014]: I0228 05:01:16.897457 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84sxp" event={"ID":"f07906b1-fbe5-4a53-a0a6-a61230dc0e55","Type":"ContainerStarted","Data":"47be39cdda5820366ea1a8d31dfde64afe167389b9a7e562ac5a359c68c7b364"} Feb 28 05:01:17 crc kubenswrapper[5014]: I0228 05:01:17.913312 5014 generic.go:334] "Generic (PLEG): container finished" podID="f07906b1-fbe5-4a53-a0a6-a61230dc0e55" containerID="47be39cdda5820366ea1a8d31dfde64afe167389b9a7e562ac5a359c68c7b364" exitCode=0 Feb 28 05:01:17 crc kubenswrapper[5014]: I0228 05:01:17.913622 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84sxp" event={"ID":"f07906b1-fbe5-4a53-a0a6-a61230dc0e55","Type":"ContainerDied","Data":"47be39cdda5820366ea1a8d31dfde64afe167389b9a7e562ac5a359c68c7b364"} Feb 28 05:01:18 crc kubenswrapper[5014]: I0228 05:01:18.922384 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-84sxp" event={"ID":"f07906b1-fbe5-4a53-a0a6-a61230dc0e55","Type":"ContainerStarted","Data":"589598a192c8d64303a9d3669d08eecb3bacaf3768ecbd020a1f6ef08c211c55"} Feb 28 05:01:24 crc kubenswrapper[5014]: I0228 05:01:24.694218 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-84sxp" Feb 28 05:01:24 crc kubenswrapper[5014]: I0228 05:01:24.695135 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-84sxp" Feb 28 05:01:24 crc kubenswrapper[5014]: I0228 05:01:24.766394 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-84sxp" Feb 28 05:01:24 crc kubenswrapper[5014]: I0228 05:01:24.817583 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-84sxp" podStartSLOduration=8.261088063 podStartE2EDuration="10.817563056s" podCreationTimestamp="2026-02-28 05:01:14 +0000 UTC" firstStartedPulling="2026-02-28 05:01:15.890144145 +0000 UTC m=+1664.560270055" lastFinishedPulling="2026-02-28 05:01:18.446619138 +0000 UTC m=+1667.116745048" observedRunningTime="2026-02-28 05:01:18.967992736 +0000 UTC m=+1667.638118656" watchObservedRunningTime="2026-02-28 05:01:24.817563056 +0000 UTC m=+1673.487688976" Feb 28 05:01:25 crc kubenswrapper[5014]: I0228 05:01:25.042476 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-84sxp" Feb 28 05:01:25 crc kubenswrapper[5014]: I0228 05:01:25.097411 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-84sxp"] Feb 28 05:01:26 crc kubenswrapper[5014]: I0228 05:01:26.995640 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-84sxp" 
podUID="f07906b1-fbe5-4a53-a0a6-a61230dc0e55" containerName="registry-server" containerID="cri-o://589598a192c8d64303a9d3669d08eecb3bacaf3768ecbd020a1f6ef08c211c55" gracePeriod=2 Feb 28 05:01:27 crc kubenswrapper[5014]: I0228 05:01:27.469626 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-84sxp" Feb 28 05:01:27 crc kubenswrapper[5014]: I0228 05:01:27.640769 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f07906b1-fbe5-4a53-a0a6-a61230dc0e55-catalog-content\") pod \"f07906b1-fbe5-4a53-a0a6-a61230dc0e55\" (UID: \"f07906b1-fbe5-4a53-a0a6-a61230dc0e55\") " Feb 28 05:01:27 crc kubenswrapper[5014]: I0228 05:01:27.640953 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nxphm\" (UniqueName: \"kubernetes.io/projected/f07906b1-fbe5-4a53-a0a6-a61230dc0e55-kube-api-access-nxphm\") pod \"f07906b1-fbe5-4a53-a0a6-a61230dc0e55\" (UID: \"f07906b1-fbe5-4a53-a0a6-a61230dc0e55\") " Feb 28 05:01:27 crc kubenswrapper[5014]: I0228 05:01:27.640985 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f07906b1-fbe5-4a53-a0a6-a61230dc0e55-utilities\") pod \"f07906b1-fbe5-4a53-a0a6-a61230dc0e55\" (UID: \"f07906b1-fbe5-4a53-a0a6-a61230dc0e55\") " Feb 28 05:01:27 crc kubenswrapper[5014]: I0228 05:01:27.642798 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f07906b1-fbe5-4a53-a0a6-a61230dc0e55-utilities" (OuterVolumeSpecName: "utilities") pod "f07906b1-fbe5-4a53-a0a6-a61230dc0e55" (UID: "f07906b1-fbe5-4a53-a0a6-a61230dc0e55"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:01:27 crc kubenswrapper[5014]: I0228 05:01:27.646455 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f07906b1-fbe5-4a53-a0a6-a61230dc0e55-kube-api-access-nxphm" (OuterVolumeSpecName: "kube-api-access-nxphm") pod "f07906b1-fbe5-4a53-a0a6-a61230dc0e55" (UID: "f07906b1-fbe5-4a53-a0a6-a61230dc0e55"). InnerVolumeSpecName "kube-api-access-nxphm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:01:27 crc kubenswrapper[5014]: I0228 05:01:27.700966 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f07906b1-fbe5-4a53-a0a6-a61230dc0e55-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f07906b1-fbe5-4a53-a0a6-a61230dc0e55" (UID: "f07906b1-fbe5-4a53-a0a6-a61230dc0e55"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:01:27 crc kubenswrapper[5014]: I0228 05:01:27.743513 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nxphm\" (UniqueName: \"kubernetes.io/projected/f07906b1-fbe5-4a53-a0a6-a61230dc0e55-kube-api-access-nxphm\") on node \"crc\" DevicePath \"\"" Feb 28 05:01:27 crc kubenswrapper[5014]: I0228 05:01:27.743570 5014 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f07906b1-fbe5-4a53-a0a6-a61230dc0e55-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 05:01:27 crc kubenswrapper[5014]: I0228 05:01:27.743592 5014 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f07906b1-fbe5-4a53-a0a6-a61230dc0e55-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 05:01:28 crc kubenswrapper[5014]: I0228 05:01:28.011683 5014 generic.go:334] "Generic (PLEG): container finished" podID="f07906b1-fbe5-4a53-a0a6-a61230dc0e55" 
containerID="589598a192c8d64303a9d3669d08eecb3bacaf3768ecbd020a1f6ef08c211c55" exitCode=0 Feb 28 05:01:28 crc kubenswrapper[5014]: I0228 05:01:28.011734 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84sxp" event={"ID":"f07906b1-fbe5-4a53-a0a6-a61230dc0e55","Type":"ContainerDied","Data":"589598a192c8d64303a9d3669d08eecb3bacaf3768ecbd020a1f6ef08c211c55"} Feb 28 05:01:28 crc kubenswrapper[5014]: I0228 05:01:28.011767 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-84sxp" event={"ID":"f07906b1-fbe5-4a53-a0a6-a61230dc0e55","Type":"ContainerDied","Data":"3146283673ed0e717b23dd69178c84505341cc229de352cf6d62145113e47858"} Feb 28 05:01:28 crc kubenswrapper[5014]: I0228 05:01:28.011790 5014 scope.go:117] "RemoveContainer" containerID="589598a192c8d64303a9d3669d08eecb3bacaf3768ecbd020a1f6ef08c211c55" Feb 28 05:01:28 crc kubenswrapper[5014]: I0228 05:01:28.011901 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-84sxp" Feb 28 05:01:28 crc kubenswrapper[5014]: I0228 05:01:28.031424 5014 scope.go:117] "RemoveContainer" containerID="47be39cdda5820366ea1a8d31dfde64afe167389b9a7e562ac5a359c68c7b364" Feb 28 05:01:28 crc kubenswrapper[5014]: I0228 05:01:28.042011 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-84sxp"] Feb 28 05:01:28 crc kubenswrapper[5014]: I0228 05:01:28.059130 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-84sxp"] Feb 28 05:01:28 crc kubenswrapper[5014]: I0228 05:01:28.062436 5014 scope.go:117] "RemoveContainer" containerID="6ff7e47e56944c798d06614e02e073896c37b53eb2fb6fdb3de71884fdd7c17b" Feb 28 05:01:28 crc kubenswrapper[5014]: I0228 05:01:28.106576 5014 scope.go:117] "RemoveContainer" containerID="589598a192c8d64303a9d3669d08eecb3bacaf3768ecbd020a1f6ef08c211c55" Feb 28 05:01:28 crc kubenswrapper[5014]: E0228 05:01:28.107116 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"589598a192c8d64303a9d3669d08eecb3bacaf3768ecbd020a1f6ef08c211c55\": container with ID starting with 589598a192c8d64303a9d3669d08eecb3bacaf3768ecbd020a1f6ef08c211c55 not found: ID does not exist" containerID="589598a192c8d64303a9d3669d08eecb3bacaf3768ecbd020a1f6ef08c211c55" Feb 28 05:01:28 crc kubenswrapper[5014]: I0228 05:01:28.107164 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"589598a192c8d64303a9d3669d08eecb3bacaf3768ecbd020a1f6ef08c211c55"} err="failed to get container status \"589598a192c8d64303a9d3669d08eecb3bacaf3768ecbd020a1f6ef08c211c55\": rpc error: code = NotFound desc = could not find container \"589598a192c8d64303a9d3669d08eecb3bacaf3768ecbd020a1f6ef08c211c55\": container with ID starting with 589598a192c8d64303a9d3669d08eecb3bacaf3768ecbd020a1f6ef08c211c55 not 
found: ID does not exist" Feb 28 05:01:28 crc kubenswrapper[5014]: I0228 05:01:28.107189 5014 scope.go:117] "RemoveContainer" containerID="47be39cdda5820366ea1a8d31dfde64afe167389b9a7e562ac5a359c68c7b364" Feb 28 05:01:28 crc kubenswrapper[5014]: E0228 05:01:28.107579 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47be39cdda5820366ea1a8d31dfde64afe167389b9a7e562ac5a359c68c7b364\": container with ID starting with 47be39cdda5820366ea1a8d31dfde64afe167389b9a7e562ac5a359c68c7b364 not found: ID does not exist" containerID="47be39cdda5820366ea1a8d31dfde64afe167389b9a7e562ac5a359c68c7b364" Feb 28 05:01:28 crc kubenswrapper[5014]: I0228 05:01:28.107609 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47be39cdda5820366ea1a8d31dfde64afe167389b9a7e562ac5a359c68c7b364"} err="failed to get container status \"47be39cdda5820366ea1a8d31dfde64afe167389b9a7e562ac5a359c68c7b364\": rpc error: code = NotFound desc = could not find container \"47be39cdda5820366ea1a8d31dfde64afe167389b9a7e562ac5a359c68c7b364\": container with ID starting with 47be39cdda5820366ea1a8d31dfde64afe167389b9a7e562ac5a359c68c7b364 not found: ID does not exist" Feb 28 05:01:28 crc kubenswrapper[5014]: I0228 05:01:28.107628 5014 scope.go:117] "RemoveContainer" containerID="6ff7e47e56944c798d06614e02e073896c37b53eb2fb6fdb3de71884fdd7c17b" Feb 28 05:01:28 crc kubenswrapper[5014]: E0228 05:01:28.108030 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ff7e47e56944c798d06614e02e073896c37b53eb2fb6fdb3de71884fdd7c17b\": container with ID starting with 6ff7e47e56944c798d06614e02e073896c37b53eb2fb6fdb3de71884fdd7c17b not found: ID does not exist" containerID="6ff7e47e56944c798d06614e02e073896c37b53eb2fb6fdb3de71884fdd7c17b" Feb 28 05:01:28 crc kubenswrapper[5014]: I0228 05:01:28.108061 5014 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ff7e47e56944c798d06614e02e073896c37b53eb2fb6fdb3de71884fdd7c17b"} err="failed to get container status \"6ff7e47e56944c798d06614e02e073896c37b53eb2fb6fdb3de71884fdd7c17b\": rpc error: code = NotFound desc = could not find container \"6ff7e47e56944c798d06614e02e073896c37b53eb2fb6fdb3de71884fdd7c17b\": container with ID starting with 6ff7e47e56944c798d06614e02e073896c37b53eb2fb6fdb3de71884fdd7c17b not found: ID does not exist" Feb 28 05:01:28 crc kubenswrapper[5014]: I0228 05:01:28.184548 5014 scope.go:117] "RemoveContainer" containerID="831c080a3f614d28f54435ea4566bbc7b3d9ce5cf8da86a40f56d8dfeed0dff0" Feb 28 05:01:28 crc kubenswrapper[5014]: E0228 05:01:28.185090 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:01:28 crc kubenswrapper[5014]: I0228 05:01:28.194177 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f07906b1-fbe5-4a53-a0a6-a61230dc0e55" path="/var/lib/kubelet/pods/f07906b1-fbe5-4a53-a0a6-a61230dc0e55/volumes" Feb 28 05:01:41 crc kubenswrapper[5014]: I0228 05:01:41.172074 5014 scope.go:117] "RemoveContainer" containerID="831c080a3f614d28f54435ea4566bbc7b3d9ce5cf8da86a40f56d8dfeed0dff0" Feb 28 05:01:41 crc kubenswrapper[5014]: E0228 05:01:41.172888 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:01:55 crc kubenswrapper[5014]: I0228 05:01:55.078284 5014 scope.go:117] "RemoveContainer" containerID="3336e496711e2ad799de320a9cb7fcdfc3046fdddf17c6e42eb0becbe91e7955" Feb 28 05:01:55 crc kubenswrapper[5014]: I0228 05:01:55.117715 5014 scope.go:117] "RemoveContainer" containerID="a32e77190dcee6357cc24518590a684cc445a736e5d603448cf1d2e7f3ea4c94" Feb 28 05:01:55 crc kubenswrapper[5014]: I0228 05:01:55.147034 5014 scope.go:117] "RemoveContainer" containerID="adae4e3669d5239495a5201e157cd64b3ec98d23e1e520ee7bee8a0c91fe1017" Feb 28 05:01:56 crc kubenswrapper[5014]: I0228 05:01:56.172156 5014 scope.go:117] "RemoveContainer" containerID="831c080a3f614d28f54435ea4566bbc7b3d9ce5cf8da86a40f56d8dfeed0dff0" Feb 28 05:01:56 crc kubenswrapper[5014]: E0228 05:01:56.172499 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:02:00 crc kubenswrapper[5014]: I0228 05:02:00.155960 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537582-d887m"] Feb 28 05:02:00 crc kubenswrapper[5014]: E0228 05:02:00.157229 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f07906b1-fbe5-4a53-a0a6-a61230dc0e55" containerName="extract-utilities" Feb 28 05:02:00 crc kubenswrapper[5014]: I0228 05:02:00.157252 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="f07906b1-fbe5-4a53-a0a6-a61230dc0e55" 
containerName="extract-utilities" Feb 28 05:02:00 crc kubenswrapper[5014]: E0228 05:02:00.157289 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f07906b1-fbe5-4a53-a0a6-a61230dc0e55" containerName="registry-server" Feb 28 05:02:00 crc kubenswrapper[5014]: I0228 05:02:00.157303 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="f07906b1-fbe5-4a53-a0a6-a61230dc0e55" containerName="registry-server" Feb 28 05:02:00 crc kubenswrapper[5014]: E0228 05:02:00.157355 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f07906b1-fbe5-4a53-a0a6-a61230dc0e55" containerName="extract-content" Feb 28 05:02:00 crc kubenswrapper[5014]: I0228 05:02:00.157369 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="f07906b1-fbe5-4a53-a0a6-a61230dc0e55" containerName="extract-content" Feb 28 05:02:00 crc kubenswrapper[5014]: I0228 05:02:00.157772 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="f07906b1-fbe5-4a53-a0a6-a61230dc0e55" containerName="registry-server" Feb 28 05:02:00 crc kubenswrapper[5014]: I0228 05:02:00.158884 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537582-d887m" Feb 28 05:02:00 crc kubenswrapper[5014]: I0228 05:02:00.162869 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 05:02:00 crc kubenswrapper[5014]: I0228 05:02:00.162940 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-r6pdk" Feb 28 05:02:00 crc kubenswrapper[5014]: I0228 05:02:00.162954 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 05:02:00 crc kubenswrapper[5014]: I0228 05:02:00.168664 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537582-d887m"] Feb 28 05:02:00 crc kubenswrapper[5014]: I0228 05:02:00.230966 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trjx9\" (UniqueName: \"kubernetes.io/projected/5df6daac-d482-491a-ab9a-10809fcbe91e-kube-api-access-trjx9\") pod \"auto-csr-approver-29537582-d887m\" (UID: \"5df6daac-d482-491a-ab9a-10809fcbe91e\") " pod="openshift-infra/auto-csr-approver-29537582-d887m" Feb 28 05:02:00 crc kubenswrapper[5014]: I0228 05:02:00.332197 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trjx9\" (UniqueName: \"kubernetes.io/projected/5df6daac-d482-491a-ab9a-10809fcbe91e-kube-api-access-trjx9\") pod \"auto-csr-approver-29537582-d887m\" (UID: \"5df6daac-d482-491a-ab9a-10809fcbe91e\") " pod="openshift-infra/auto-csr-approver-29537582-d887m" Feb 28 05:02:00 crc kubenswrapper[5014]: I0228 05:02:00.367090 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trjx9\" (UniqueName: \"kubernetes.io/projected/5df6daac-d482-491a-ab9a-10809fcbe91e-kube-api-access-trjx9\") pod \"auto-csr-approver-29537582-d887m\" (UID: \"5df6daac-d482-491a-ab9a-10809fcbe91e\") " 
pod="openshift-infra/auto-csr-approver-29537582-d887m" Feb 28 05:02:00 crc kubenswrapper[5014]: I0228 05:02:00.478081 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537582-d887m" Feb 28 05:02:00 crc kubenswrapper[5014]: I0228 05:02:00.937075 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537582-d887m"] Feb 28 05:02:01 crc kubenswrapper[5014]: I0228 05:02:01.360565 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537582-d887m" event={"ID":"5df6daac-d482-491a-ab9a-10809fcbe91e","Type":"ContainerStarted","Data":"c792f17e676e09d51906152987e49886d8afe9e3336e9a7acdd4302a67b2961c"} Feb 28 05:02:02 crc kubenswrapper[5014]: I0228 05:02:02.375407 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537582-d887m" event={"ID":"5df6daac-d482-491a-ab9a-10809fcbe91e","Type":"ContainerStarted","Data":"7f5ba6d85fe609b8c195c55a1b396c19eb252e2a9adc0b331d7bd4f698034a13"} Feb 28 05:02:02 crc kubenswrapper[5014]: I0228 05:02:02.396149 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537582-d887m" podStartSLOduration=1.386712635 podStartE2EDuration="2.396126138s" podCreationTimestamp="2026-02-28 05:02:00 +0000 UTC" firstStartedPulling="2026-02-28 05:02:00.941543934 +0000 UTC m=+1709.611669844" lastFinishedPulling="2026-02-28 05:02:01.950957437 +0000 UTC m=+1710.621083347" observedRunningTime="2026-02-28 05:02:02.392993822 +0000 UTC m=+1711.063119812" watchObservedRunningTime="2026-02-28 05:02:02.396126138 +0000 UTC m=+1711.066252048" Feb 28 05:02:03 crc kubenswrapper[5014]: I0228 05:02:03.388416 5014 generic.go:334] "Generic (PLEG): container finished" podID="5df6daac-d482-491a-ab9a-10809fcbe91e" containerID="7f5ba6d85fe609b8c195c55a1b396c19eb252e2a9adc0b331d7bd4f698034a13" exitCode=0 Feb 28 05:02:03 crc 
kubenswrapper[5014]: I0228 05:02:03.388577 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537582-d887m" event={"ID":"5df6daac-d482-491a-ab9a-10809fcbe91e","Type":"ContainerDied","Data":"7f5ba6d85fe609b8c195c55a1b396c19eb252e2a9adc0b331d7bd4f698034a13"} Feb 28 05:02:05 crc kubenswrapper[5014]: I0228 05:02:05.249641 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537582-d887m" Feb 28 05:02:05 crc kubenswrapper[5014]: I0228 05:02:05.337440 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trjx9\" (UniqueName: \"kubernetes.io/projected/5df6daac-d482-491a-ab9a-10809fcbe91e-kube-api-access-trjx9\") pod \"5df6daac-d482-491a-ab9a-10809fcbe91e\" (UID: \"5df6daac-d482-491a-ab9a-10809fcbe91e\") " Feb 28 05:02:05 crc kubenswrapper[5014]: I0228 05:02:05.342736 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5df6daac-d482-491a-ab9a-10809fcbe91e-kube-api-access-trjx9" (OuterVolumeSpecName: "kube-api-access-trjx9") pod "5df6daac-d482-491a-ab9a-10809fcbe91e" (UID: "5df6daac-d482-491a-ab9a-10809fcbe91e"). InnerVolumeSpecName "kube-api-access-trjx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:02:05 crc kubenswrapper[5014]: I0228 05:02:05.424587 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537582-d887m" event={"ID":"5df6daac-d482-491a-ab9a-10809fcbe91e","Type":"ContainerDied","Data":"c792f17e676e09d51906152987e49886d8afe9e3336e9a7acdd4302a67b2961c"} Feb 28 05:02:05 crc kubenswrapper[5014]: I0228 05:02:05.424631 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c792f17e676e09d51906152987e49886d8afe9e3336e9a7acdd4302a67b2961c" Feb 28 05:02:05 crc kubenswrapper[5014]: I0228 05:02:05.424674 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537582-d887m" Feb 28 05:02:05 crc kubenswrapper[5014]: I0228 05:02:05.439599 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trjx9\" (UniqueName: \"kubernetes.io/projected/5df6daac-d482-491a-ab9a-10809fcbe91e-kube-api-access-trjx9\") on node \"crc\" DevicePath \"\"" Feb 28 05:02:06 crc kubenswrapper[5014]: I0228 05:02:06.335032 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537576-mx47j"] Feb 28 05:02:06 crc kubenswrapper[5014]: I0228 05:02:06.344758 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537576-mx47j"] Feb 28 05:02:08 crc kubenswrapper[5014]: I0228 05:02:08.190186 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3702cbc-ce6b-4f93-9015-bd7cdc462025" path="/var/lib/kubelet/pods/a3702cbc-ce6b-4f93-9015-bd7cdc462025/volumes" Feb 28 05:02:10 crc kubenswrapper[5014]: I0228 05:02:10.172278 5014 scope.go:117] "RemoveContainer" containerID="831c080a3f614d28f54435ea4566bbc7b3d9ce5cf8da86a40f56d8dfeed0dff0" Feb 28 05:02:10 crc kubenswrapper[5014]: E0228 05:02:10.172722 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:02:25 crc kubenswrapper[5014]: I0228 05:02:25.171400 5014 scope.go:117] "RemoveContainer" containerID="831c080a3f614d28f54435ea4566bbc7b3d9ce5cf8da86a40f56d8dfeed0dff0" Feb 28 05:02:25 crc kubenswrapper[5014]: E0228 05:02:25.173118 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:02:27 crc kubenswrapper[5014]: I0228 05:02:27.045346 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-3ded-account-create-update-jxfct"] Feb 28 05:02:27 crc kubenswrapper[5014]: I0228 05:02:27.059987 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-n2z8v"] Feb 28 05:02:27 crc kubenswrapper[5014]: I0228 05:02:27.072084 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-5f554"] Feb 28 05:02:27 crc kubenswrapper[5014]: I0228 05:02:27.081503 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-3ded-account-create-update-jxfct"] Feb 28 05:02:27 crc kubenswrapper[5014]: I0228 05:02:27.089209 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-n2z8v"] Feb 28 05:02:27 crc kubenswrapper[5014]: I0228 05:02:27.097070 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-5f554"] Feb 28 05:02:28 crc kubenswrapper[5014]: I0228 05:02:28.035518 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-3523-account-create-update-xd6xg"] Feb 28 05:02:28 crc kubenswrapper[5014]: I0228 05:02:28.048406 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-3523-account-create-update-xd6xg"] Feb 28 05:02:28 crc kubenswrapper[5014]: I0228 05:02:28.190702 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47438bcb-f130-4a0d-b000-fc61e91a5762" path="/var/lib/kubelet/pods/47438bcb-f130-4a0d-b000-fc61e91a5762/volumes" Feb 28 05:02:28 crc kubenswrapper[5014]: I0228 05:02:28.191497 5014 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="7ee7b14b-72c2-44e4-9e19-5b3351c8adef" path="/var/lib/kubelet/pods/7ee7b14b-72c2-44e4-9e19-5b3351c8adef/volumes" Feb 28 05:02:28 crc kubenswrapper[5014]: I0228 05:02:28.192302 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7a56f69-c15e-45a1-9a37-a8a0d635f307" path="/var/lib/kubelet/pods/b7a56f69-c15e-45a1-9a37-a8a0d635f307/volumes" Feb 28 05:02:28 crc kubenswrapper[5014]: I0228 05:02:28.193039 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd179fc0-8f02-477b-88db-7f4e27bc5b5a" path="/var/lib/kubelet/pods/dd179fc0-8f02-477b-88db-7f4e27bc5b5a/volumes" Feb 28 05:02:30 crc kubenswrapper[5014]: I0228 05:02:30.035500 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-400e-account-create-update-xtzgq"] Feb 28 05:02:30 crc kubenswrapper[5014]: I0228 05:02:30.045413 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-jsf2v"] Feb 28 05:02:30 crc kubenswrapper[5014]: I0228 05:02:30.056948 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-400e-account-create-update-xtzgq"] Feb 28 05:02:30 crc kubenswrapper[5014]: I0228 05:02:30.070329 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-jsf2v"] Feb 28 05:02:30 crc kubenswrapper[5014]: I0228 05:02:30.190379 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e6957be-e258-44d6-b0d3-e1317a0310c1" path="/var/lib/kubelet/pods/3e6957be-e258-44d6-b0d3-e1317a0310c1/volumes" Feb 28 05:02:30 crc kubenswrapper[5014]: I0228 05:02:30.191613 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6efb968a-6151-439b-a324-e36d9c8b2dee" path="/var/lib/kubelet/pods/6efb968a-6151-439b-a324-e36d9c8b2dee/volumes" Feb 28 05:02:37 crc kubenswrapper[5014]: I0228 05:02:37.172018 5014 scope.go:117] "RemoveContainer" 
containerID="831c080a3f614d28f54435ea4566bbc7b3d9ce5cf8da86a40f56d8dfeed0dff0" Feb 28 05:02:37 crc kubenswrapper[5014]: E0228 05:02:37.172655 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:02:39 crc kubenswrapper[5014]: I0228 05:02:39.809139 5014 generic.go:334] "Generic (PLEG): container finished" podID="92c43e33-7947-4ad2-984a-e2618b76f368" containerID="bbc25012e3c32f3a0a26f0ee9952ec4f8ff10a50e47536dd635656a795922072" exitCode=0 Feb 28 05:02:39 crc kubenswrapper[5014]: I0228 05:02:39.809219 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq52b" event={"ID":"92c43e33-7947-4ad2-984a-e2618b76f368","Type":"ContainerDied","Data":"bbc25012e3c32f3a0a26f0ee9952ec4f8ff10a50e47536dd635656a795922072"} Feb 28 05:02:41 crc kubenswrapper[5014]: I0228 05:02:41.272434 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq52b" Feb 28 05:02:41 crc kubenswrapper[5014]: I0228 05:02:41.331247 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92c43e33-7947-4ad2-984a-e2618b76f368-inventory\") pod \"92c43e33-7947-4ad2-984a-e2618b76f368\" (UID: \"92c43e33-7947-4ad2-984a-e2618b76f368\") " Feb 28 05:02:41 crc kubenswrapper[5014]: I0228 05:02:41.331331 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqbxq\" (UniqueName: \"kubernetes.io/projected/92c43e33-7947-4ad2-984a-e2618b76f368-kube-api-access-sqbxq\") pod \"92c43e33-7947-4ad2-984a-e2618b76f368\" (UID: \"92c43e33-7947-4ad2-984a-e2618b76f368\") " Feb 28 05:02:41 crc kubenswrapper[5014]: I0228 05:02:41.331376 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92c43e33-7947-4ad2-984a-e2618b76f368-ssh-key-openstack-edpm-ipam\") pod \"92c43e33-7947-4ad2-984a-e2618b76f368\" (UID: \"92c43e33-7947-4ad2-984a-e2618b76f368\") " Feb 28 05:02:41 crc kubenswrapper[5014]: I0228 05:02:41.338096 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92c43e33-7947-4ad2-984a-e2618b76f368-kube-api-access-sqbxq" (OuterVolumeSpecName: "kube-api-access-sqbxq") pod "92c43e33-7947-4ad2-984a-e2618b76f368" (UID: "92c43e33-7947-4ad2-984a-e2618b76f368"). InnerVolumeSpecName "kube-api-access-sqbxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:02:41 crc kubenswrapper[5014]: I0228 05:02:41.360262 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92c43e33-7947-4ad2-984a-e2618b76f368-inventory" (OuterVolumeSpecName: "inventory") pod "92c43e33-7947-4ad2-984a-e2618b76f368" (UID: "92c43e33-7947-4ad2-984a-e2618b76f368"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:02:41 crc kubenswrapper[5014]: I0228 05:02:41.370930 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92c43e33-7947-4ad2-984a-e2618b76f368-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "92c43e33-7947-4ad2-984a-e2618b76f368" (UID: "92c43e33-7947-4ad2-984a-e2618b76f368"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:02:41 crc kubenswrapper[5014]: I0228 05:02:41.434162 5014 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/92c43e33-7947-4ad2-984a-e2618b76f368-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 05:02:41 crc kubenswrapper[5014]: I0228 05:02:41.434202 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqbxq\" (UniqueName: \"kubernetes.io/projected/92c43e33-7947-4ad2-984a-e2618b76f368-kube-api-access-sqbxq\") on node \"crc\" DevicePath \"\"" Feb 28 05:02:41 crc kubenswrapper[5014]: I0228 05:02:41.434212 5014 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/92c43e33-7947-4ad2-984a-e2618b76f368-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 05:02:41 crc kubenswrapper[5014]: I0228 05:02:41.844944 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq52b" event={"ID":"92c43e33-7947-4ad2-984a-e2618b76f368","Type":"ContainerDied","Data":"570ee847e6b36f0e94bb439081aaf20750eda45c94caaa99750c8d81c53e1711"} Feb 28 05:02:41 crc kubenswrapper[5014]: I0228 05:02:41.845242 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="570ee847e6b36f0e94bb439081aaf20750eda45c94caaa99750c8d81c53e1711" Feb 28 05:02:41 crc kubenswrapper[5014]: I0228 
05:02:41.845025 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fq52b" Feb 28 05:02:41 crc kubenswrapper[5014]: I0228 05:02:41.938954 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldjp9"] Feb 28 05:02:41 crc kubenswrapper[5014]: E0228 05:02:41.939468 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92c43e33-7947-4ad2-984a-e2618b76f368" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 28 05:02:41 crc kubenswrapper[5014]: I0228 05:02:41.939490 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="92c43e33-7947-4ad2-984a-e2618b76f368" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 28 05:02:41 crc kubenswrapper[5014]: E0228 05:02:41.939514 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5df6daac-d482-491a-ab9a-10809fcbe91e" containerName="oc" Feb 28 05:02:41 crc kubenswrapper[5014]: I0228 05:02:41.939522 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="5df6daac-d482-491a-ab9a-10809fcbe91e" containerName="oc" Feb 28 05:02:41 crc kubenswrapper[5014]: I0228 05:02:41.939743 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="92c43e33-7947-4ad2-984a-e2618b76f368" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 28 05:02:41 crc kubenswrapper[5014]: I0228 05:02:41.939757 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="5df6daac-d482-491a-ab9a-10809fcbe91e" containerName="oc" Feb 28 05:02:41 crc kubenswrapper[5014]: I0228 05:02:41.940546 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldjp9" Feb 28 05:02:41 crc kubenswrapper[5014]: I0228 05:02:41.944323 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 05:02:41 crc kubenswrapper[5014]: I0228 05:02:41.944949 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 05:02:41 crc kubenswrapper[5014]: I0228 05:02:41.945562 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6dz6b" Feb 28 05:02:41 crc kubenswrapper[5014]: I0228 05:02:41.949349 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 05:02:41 crc kubenswrapper[5014]: I0228 05:02:41.953649 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldjp9"] Feb 28 05:02:42 crc kubenswrapper[5014]: I0228 05:02:42.046095 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/169069f4-d382-4045-99a5-cf54af88ee18-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ldjp9\" (UID: \"169069f4-d382-4045-99a5-cf54af88ee18\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldjp9" Feb 28 05:02:42 crc kubenswrapper[5014]: I0228 05:02:42.046252 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4rzl\" (UniqueName: \"kubernetes.io/projected/169069f4-d382-4045-99a5-cf54af88ee18-kube-api-access-v4rzl\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ldjp9\" (UID: \"169069f4-d382-4045-99a5-cf54af88ee18\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldjp9" Feb 28 05:02:42 crc kubenswrapper[5014]: 
I0228 05:02:42.046368 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/169069f4-d382-4045-99a5-cf54af88ee18-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ldjp9\" (UID: \"169069f4-d382-4045-99a5-cf54af88ee18\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldjp9" Feb 28 05:02:42 crc kubenswrapper[5014]: I0228 05:02:42.148174 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4rzl\" (UniqueName: \"kubernetes.io/projected/169069f4-d382-4045-99a5-cf54af88ee18-kube-api-access-v4rzl\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ldjp9\" (UID: \"169069f4-d382-4045-99a5-cf54af88ee18\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldjp9" Feb 28 05:02:42 crc kubenswrapper[5014]: I0228 05:02:42.148438 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/169069f4-d382-4045-99a5-cf54af88ee18-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ldjp9\" (UID: \"169069f4-d382-4045-99a5-cf54af88ee18\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldjp9" Feb 28 05:02:42 crc kubenswrapper[5014]: I0228 05:02:42.148525 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/169069f4-d382-4045-99a5-cf54af88ee18-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ldjp9\" (UID: \"169069f4-d382-4045-99a5-cf54af88ee18\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldjp9" Feb 28 05:02:42 crc kubenswrapper[5014]: I0228 05:02:42.158779 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/169069f4-d382-4045-99a5-cf54af88ee18-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ldjp9\" (UID: \"169069f4-d382-4045-99a5-cf54af88ee18\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldjp9" Feb 28 05:02:42 crc kubenswrapper[5014]: I0228 05:02:42.159089 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/169069f4-d382-4045-99a5-cf54af88ee18-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ldjp9\" (UID: \"169069f4-d382-4045-99a5-cf54af88ee18\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldjp9" Feb 28 05:02:42 crc kubenswrapper[5014]: I0228 05:02:42.175891 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4rzl\" (UniqueName: \"kubernetes.io/projected/169069f4-d382-4045-99a5-cf54af88ee18-kube-api-access-v4rzl\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ldjp9\" (UID: \"169069f4-d382-4045-99a5-cf54af88ee18\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldjp9" Feb 28 05:02:42 crc kubenswrapper[5014]: I0228 05:02:42.272389 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldjp9" Feb 28 05:02:42 crc kubenswrapper[5014]: I0228 05:02:42.824445 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldjp9"] Feb 28 05:02:42 crc kubenswrapper[5014]: I0228 05:02:42.856604 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldjp9" event={"ID":"169069f4-d382-4045-99a5-cf54af88ee18","Type":"ContainerStarted","Data":"cc914d444fb33946ffbcf8e4a0011027a5275ff37b43da70de68da62957df782"} Feb 28 05:02:43 crc kubenswrapper[5014]: I0228 05:02:43.870092 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldjp9" event={"ID":"169069f4-d382-4045-99a5-cf54af88ee18","Type":"ContainerStarted","Data":"4b513a34ec88c0cae601ac3ded1e7c089c1729b6c85169759994f1be690e0c6e"} Feb 28 05:02:43 crc kubenswrapper[5014]: I0228 05:02:43.908910 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldjp9" podStartSLOduration=2.436650201 podStartE2EDuration="2.908883114s" podCreationTimestamp="2026-02-28 05:02:41 +0000 UTC" firstStartedPulling="2026-02-28 05:02:42.829607766 +0000 UTC m=+1751.499733676" lastFinishedPulling="2026-02-28 05:02:43.301840649 +0000 UTC m=+1751.971966589" observedRunningTime="2026-02-28 05:02:43.890868294 +0000 UTC m=+1752.560994214" watchObservedRunningTime="2026-02-28 05:02:43.908883114 +0000 UTC m=+1752.579009064" Feb 28 05:02:48 crc kubenswrapper[5014]: I0228 05:02:48.172647 5014 scope.go:117] "RemoveContainer" containerID="831c080a3f614d28f54435ea4566bbc7b3d9ce5cf8da86a40f56d8dfeed0dff0" Feb 28 05:02:48 crc kubenswrapper[5014]: E0228 05:02:48.173673 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:02:51 crc kubenswrapper[5014]: I0228 05:02:51.053928 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-np8j9"] Feb 28 05:02:51 crc kubenswrapper[5014]: I0228 05:02:51.065162 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-np8j9"] Feb 28 05:02:52 crc kubenswrapper[5014]: I0228 05:02:52.182404 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b06b7983-c1b9-433b-b3f7-31b07fe8df22" path="/var/lib/kubelet/pods/b06b7983-c1b9-433b-b3f7-31b07fe8df22/volumes" Feb 28 05:02:55 crc kubenswrapper[5014]: I0228 05:02:55.270338 5014 scope.go:117] "RemoveContainer" containerID="4460f1174c4ed73725408e55d6342a6bdac47f876e73b1cd67cd81e087589dbf" Feb 28 05:02:55 crc kubenswrapper[5014]: I0228 05:02:55.306609 5014 scope.go:117] "RemoveContainer" containerID="be2fa0324dbceeec1f1a48344693a1c48b87ea689ea2e6f070c6a45ed41953e8" Feb 28 05:02:55 crc kubenswrapper[5014]: I0228 05:02:55.369915 5014 scope.go:117] "RemoveContainer" containerID="45beb3fb190864d244f2e3d73b956fba32e04bcb19e0cb03a3e97a246d9047eb" Feb 28 05:02:55 crc kubenswrapper[5014]: I0228 05:02:55.405847 5014 scope.go:117] "RemoveContainer" containerID="a113d8261bf42819854fc91b840910fa4734009b3c0509bd87318623d24afbbb" Feb 28 05:02:55 crc kubenswrapper[5014]: I0228 05:02:55.439310 5014 scope.go:117] "RemoveContainer" containerID="1994779af0e7875777cd96c2afda74a553a72cba74da39df8d39eb135fe7d067" Feb 28 05:02:55 crc kubenswrapper[5014]: I0228 05:02:55.480310 5014 scope.go:117] "RemoveContainer" containerID="bf5c7f94241b8576990c9e23cb39a4105490fa24dc0f93456818da8fac53b60b" Feb 
28 05:02:55 crc kubenswrapper[5014]: I0228 05:02:55.529869 5014 scope.go:117] "RemoveContainer" containerID="074da8711d8579fc6973bca64ac7f723d59c79406e212a6740aeb5e9ed872931" Feb 28 05:02:55 crc kubenswrapper[5014]: I0228 05:02:55.598098 5014 scope.go:117] "RemoveContainer" containerID="9cfe7cb9677c062ac380134d631692243311c8d628a166fce7c28671f1abec22" Feb 28 05:02:55 crc kubenswrapper[5014]: I0228 05:02:55.622443 5014 scope.go:117] "RemoveContainer" containerID="fddefa038a4ad353a98dce29b7ce157696f0942effef9f7c70434972964ef3f8" Feb 28 05:02:55 crc kubenswrapper[5014]: I0228 05:02:55.649172 5014 scope.go:117] "RemoveContainer" containerID="8e045b92cef9362d67b5d4ed98632aa9f63c689047ceff522638c5235d5ee134" Feb 28 05:02:57 crc kubenswrapper[5014]: I0228 05:02:57.088193 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-xhkk8"] Feb 28 05:02:57 crc kubenswrapper[5014]: I0228 05:02:57.109501 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-xhkk8"] Feb 28 05:02:57 crc kubenswrapper[5014]: I0228 05:02:57.123119 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-00cb-account-create-update-pz9ks"] Feb 28 05:02:57 crc kubenswrapper[5014]: I0228 05:02:57.134210 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-e8bd-account-create-update-wksxn"] Feb 28 05:02:57 crc kubenswrapper[5014]: I0228 05:02:57.141292 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-85af-account-create-update-r6zrh"] Feb 28 05:02:57 crc kubenswrapper[5014]: I0228 05:02:57.148381 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-dlt7p"] Feb 28 05:02:57 crc kubenswrapper[5014]: I0228 05:02:57.155040 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-e8bd-account-create-update-wksxn"] Feb 28 05:02:57 crc kubenswrapper[5014]: I0228 05:02:57.161556 5014 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/barbican-00cb-account-create-update-pz9ks"] Feb 28 05:02:57 crc kubenswrapper[5014]: I0228 05:02:57.167856 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-dlt7p"] Feb 28 05:02:57 crc kubenswrapper[5014]: I0228 05:02:57.174987 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-85af-account-create-update-r6zrh"] Feb 28 05:02:57 crc kubenswrapper[5014]: I0228 05:02:57.182621 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-59z6v"] Feb 28 05:02:57 crc kubenswrapper[5014]: I0228 05:02:57.189353 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-59z6v"] Feb 28 05:02:58 crc kubenswrapper[5014]: I0228 05:02:58.190719 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5910fc74-7b13-4884-a7f8-27156a1e013c" path="/var/lib/kubelet/pods/5910fc74-7b13-4884-a7f8-27156a1e013c/volumes" Feb 28 05:02:58 crc kubenswrapper[5014]: I0228 05:02:58.192729 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fa3a83e-dbd3-4274-9267-d70f5d6d0c16" path="/var/lib/kubelet/pods/5fa3a83e-dbd3-4274-9267-d70f5d6d0c16/volumes" Feb 28 05:02:58 crc kubenswrapper[5014]: I0228 05:02:58.194281 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6694f8b3-f730-49e4-8fc1-55a39e5acf4d" path="/var/lib/kubelet/pods/6694f8b3-f730-49e4-8fc1-55a39e5acf4d/volumes" Feb 28 05:02:58 crc kubenswrapper[5014]: I0228 05:02:58.195446 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ddbcd84-162e-477c-8005-b8abee09ff21" path="/var/lib/kubelet/pods/6ddbcd84-162e-477c-8005-b8abee09ff21/volumes" Feb 28 05:02:58 crc kubenswrapper[5014]: I0228 05:02:58.197569 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8edf7176-4345-42b7-a018-1574b7fb86b8" path="/var/lib/kubelet/pods/8edf7176-4345-42b7-a018-1574b7fb86b8/volumes" Feb 28 05:02:58 crc 
kubenswrapper[5014]: I0228 05:02:58.198943 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b282ed90-fa26-48d8-bb49-98036e930eb4" path="/var/lib/kubelet/pods/b282ed90-fa26-48d8-bb49-98036e930eb4/volumes" Feb 28 05:03:02 crc kubenswrapper[5014]: I0228 05:03:02.177499 5014 scope.go:117] "RemoveContainer" containerID="831c080a3f614d28f54435ea4566bbc7b3d9ce5cf8da86a40f56d8dfeed0dff0" Feb 28 05:03:02 crc kubenswrapper[5014]: E0228 05:03:02.178067 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:03:10 crc kubenswrapper[5014]: I0228 05:03:10.046643 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-fz9cq"] Feb 28 05:03:10 crc kubenswrapper[5014]: I0228 05:03:10.056175 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-fz9cq"] Feb 28 05:03:10 crc kubenswrapper[5014]: I0228 05:03:10.190295 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94986f9e-b185-4fb3-98c1-6f02fbfc64e5" path="/var/lib/kubelet/pods/94986f9e-b185-4fb3-98c1-6f02fbfc64e5/volumes" Feb 28 05:03:12 crc kubenswrapper[5014]: I0228 05:03:12.038907 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-tz5jx"] Feb 28 05:03:12 crc kubenswrapper[5014]: I0228 05:03:12.046866 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-tz5jx"] Feb 28 05:03:12 crc kubenswrapper[5014]: I0228 05:03:12.190137 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d232d598-8b65-47f6-a5dc-9d77d37d9b80" 
path="/var/lib/kubelet/pods/d232d598-8b65-47f6-a5dc-9d77d37d9b80/volumes" Feb 28 05:03:15 crc kubenswrapper[5014]: I0228 05:03:15.171571 5014 scope.go:117] "RemoveContainer" containerID="831c080a3f614d28f54435ea4566bbc7b3d9ce5cf8da86a40f56d8dfeed0dff0" Feb 28 05:03:15 crc kubenswrapper[5014]: E0228 05:03:15.174149 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:03:30 crc kubenswrapper[5014]: I0228 05:03:30.172058 5014 scope.go:117] "RemoveContainer" containerID="831c080a3f614d28f54435ea4566bbc7b3d9ce5cf8da86a40f56d8dfeed0dff0" Feb 28 05:03:30 crc kubenswrapper[5014]: E0228 05:03:30.173000 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:03:42 crc kubenswrapper[5014]: I0228 05:03:42.178292 5014 scope.go:117] "RemoveContainer" containerID="831c080a3f614d28f54435ea4566bbc7b3d9ce5cf8da86a40f56d8dfeed0dff0" Feb 28 05:03:42 crc kubenswrapper[5014]: E0228 05:03:42.179151 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:03:46 crc kubenswrapper[5014]: I0228 05:03:46.677576 5014 generic.go:334] "Generic (PLEG): container finished" podID="169069f4-d382-4045-99a5-cf54af88ee18" containerID="4b513a34ec88c0cae601ac3ded1e7c089c1729b6c85169759994f1be690e0c6e" exitCode=0 Feb 28 05:03:46 crc kubenswrapper[5014]: I0228 05:03:46.677676 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldjp9" event={"ID":"169069f4-d382-4045-99a5-cf54af88ee18","Type":"ContainerDied","Data":"4b513a34ec88c0cae601ac3ded1e7c089c1729b6c85169759994f1be690e0c6e"} Feb 28 05:03:48 crc kubenswrapper[5014]: I0228 05:03:48.118488 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldjp9" Feb 28 05:03:48 crc kubenswrapper[5014]: I0228 05:03:48.297390 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/169069f4-d382-4045-99a5-cf54af88ee18-ssh-key-openstack-edpm-ipam\") pod \"169069f4-d382-4045-99a5-cf54af88ee18\" (UID: \"169069f4-d382-4045-99a5-cf54af88ee18\") " Feb 28 05:03:48 crc kubenswrapper[5014]: I0228 05:03:48.297637 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4rzl\" (UniqueName: \"kubernetes.io/projected/169069f4-d382-4045-99a5-cf54af88ee18-kube-api-access-v4rzl\") pod \"169069f4-d382-4045-99a5-cf54af88ee18\" (UID: \"169069f4-d382-4045-99a5-cf54af88ee18\") " Feb 28 05:03:48 crc kubenswrapper[5014]: I0228 05:03:48.297680 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/169069f4-d382-4045-99a5-cf54af88ee18-inventory\") pod \"169069f4-d382-4045-99a5-cf54af88ee18\" (UID: 
\"169069f4-d382-4045-99a5-cf54af88ee18\") " Feb 28 05:03:48 crc kubenswrapper[5014]: I0228 05:03:48.305112 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/169069f4-d382-4045-99a5-cf54af88ee18-kube-api-access-v4rzl" (OuterVolumeSpecName: "kube-api-access-v4rzl") pod "169069f4-d382-4045-99a5-cf54af88ee18" (UID: "169069f4-d382-4045-99a5-cf54af88ee18"). InnerVolumeSpecName "kube-api-access-v4rzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:03:48 crc kubenswrapper[5014]: I0228 05:03:48.325440 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/169069f4-d382-4045-99a5-cf54af88ee18-inventory" (OuterVolumeSpecName: "inventory") pod "169069f4-d382-4045-99a5-cf54af88ee18" (UID: "169069f4-d382-4045-99a5-cf54af88ee18"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:03:48 crc kubenswrapper[5014]: I0228 05:03:48.332892 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/169069f4-d382-4045-99a5-cf54af88ee18-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "169069f4-d382-4045-99a5-cf54af88ee18" (UID: "169069f4-d382-4045-99a5-cf54af88ee18"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:03:48 crc kubenswrapper[5014]: I0228 05:03:48.401023 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4rzl\" (UniqueName: \"kubernetes.io/projected/169069f4-d382-4045-99a5-cf54af88ee18-kube-api-access-v4rzl\") on node \"crc\" DevicePath \"\"" Feb 28 05:03:48 crc kubenswrapper[5014]: I0228 05:03:48.401062 5014 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/169069f4-d382-4045-99a5-cf54af88ee18-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 05:03:48 crc kubenswrapper[5014]: I0228 05:03:48.401076 5014 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/169069f4-d382-4045-99a5-cf54af88ee18-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 05:03:48 crc kubenswrapper[5014]: I0228 05:03:48.703624 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldjp9" event={"ID":"169069f4-d382-4045-99a5-cf54af88ee18","Type":"ContainerDied","Data":"cc914d444fb33946ffbcf8e4a0011027a5275ff37b43da70de68da62957df782"} Feb 28 05:03:48 crc kubenswrapper[5014]: I0228 05:03:48.703951 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc914d444fb33946ffbcf8e4a0011027a5275ff37b43da70de68da62957df782" Feb 28 05:03:48 crc kubenswrapper[5014]: I0228 05:03:48.703718 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldjp9" Feb 28 05:03:48 crc kubenswrapper[5014]: I0228 05:03:48.819014 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8r6gv"] Feb 28 05:03:48 crc kubenswrapper[5014]: E0228 05:03:48.819526 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="169069f4-d382-4045-99a5-cf54af88ee18" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 28 05:03:48 crc kubenswrapper[5014]: I0228 05:03:48.819550 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="169069f4-d382-4045-99a5-cf54af88ee18" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 28 05:03:48 crc kubenswrapper[5014]: I0228 05:03:48.819820 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="169069f4-d382-4045-99a5-cf54af88ee18" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 28 05:03:48 crc kubenswrapper[5014]: I0228 05:03:48.820625 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8r6gv" Feb 28 05:03:48 crc kubenswrapper[5014]: I0228 05:03:48.825236 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 05:03:48 crc kubenswrapper[5014]: I0228 05:03:48.825896 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 05:03:48 crc kubenswrapper[5014]: I0228 05:03:48.826132 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 05:03:48 crc kubenswrapper[5014]: I0228 05:03:48.829146 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6dz6b" Feb 28 05:03:48 crc kubenswrapper[5014]: I0228 05:03:48.840989 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8r6gv"] Feb 28 05:03:49 crc kubenswrapper[5014]: I0228 05:03:49.012487 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f59w2\" (UniqueName: \"kubernetes.io/projected/5551729e-bd25-4c6c-b3d6-24a339aeab5c-kube-api-access-f59w2\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8r6gv\" (UID: \"5551729e-bd25-4c6c-b3d6-24a339aeab5c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8r6gv" Feb 28 05:03:49 crc kubenswrapper[5014]: I0228 05:03:49.012768 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5551729e-bd25-4c6c-b3d6-24a339aeab5c-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8r6gv\" (UID: \"5551729e-bd25-4c6c-b3d6-24a339aeab5c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8r6gv" Feb 28 05:03:49 crc kubenswrapper[5014]: I0228 
05:03:49.012981 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5551729e-bd25-4c6c-b3d6-24a339aeab5c-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8r6gv\" (UID: \"5551729e-bd25-4c6c-b3d6-24a339aeab5c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8r6gv" Feb 28 05:03:49 crc kubenswrapper[5014]: I0228 05:03:49.114956 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5551729e-bd25-4c6c-b3d6-24a339aeab5c-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8r6gv\" (UID: \"5551729e-bd25-4c6c-b3d6-24a339aeab5c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8r6gv" Feb 28 05:03:49 crc kubenswrapper[5014]: I0228 05:03:49.115049 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5551729e-bd25-4c6c-b3d6-24a339aeab5c-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8r6gv\" (UID: \"5551729e-bd25-4c6c-b3d6-24a339aeab5c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8r6gv" Feb 28 05:03:49 crc kubenswrapper[5014]: I0228 05:03:49.115124 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f59w2\" (UniqueName: \"kubernetes.io/projected/5551729e-bd25-4c6c-b3d6-24a339aeab5c-kube-api-access-f59w2\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8r6gv\" (UID: \"5551729e-bd25-4c6c-b3d6-24a339aeab5c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8r6gv" Feb 28 05:03:49 crc kubenswrapper[5014]: I0228 05:03:49.120281 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/5551729e-bd25-4c6c-b3d6-24a339aeab5c-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8r6gv\" (UID: \"5551729e-bd25-4c6c-b3d6-24a339aeab5c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8r6gv" Feb 28 05:03:49 crc kubenswrapper[5014]: I0228 05:03:49.120598 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5551729e-bd25-4c6c-b3d6-24a339aeab5c-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8r6gv\" (UID: \"5551729e-bd25-4c6c-b3d6-24a339aeab5c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8r6gv" Feb 28 05:03:49 crc kubenswrapper[5014]: I0228 05:03:49.140280 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f59w2\" (UniqueName: \"kubernetes.io/projected/5551729e-bd25-4c6c-b3d6-24a339aeab5c-kube-api-access-f59w2\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-8r6gv\" (UID: \"5551729e-bd25-4c6c-b3d6-24a339aeab5c\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8r6gv" Feb 28 05:03:49 crc kubenswrapper[5014]: I0228 05:03:49.149035 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8r6gv" Feb 28 05:03:49 crc kubenswrapper[5014]: I0228 05:03:49.703700 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8r6gv"] Feb 28 05:03:49 crc kubenswrapper[5014]: I0228 05:03:49.716161 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8r6gv" event={"ID":"5551729e-bd25-4c6c-b3d6-24a339aeab5c","Type":"ContainerStarted","Data":"56698d35938679ac99bbbf229801952fda6f5994ccf266327d853803f3622d70"} Feb 28 05:03:50 crc kubenswrapper[5014]: I0228 05:03:50.728069 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8r6gv" event={"ID":"5551729e-bd25-4c6c-b3d6-24a339aeab5c","Type":"ContainerStarted","Data":"aee81b8a15c88fcb0c776a66479f954440c0bd1f572b0f5af2dec01080ebecfd"} Feb 28 05:03:50 crc kubenswrapper[5014]: I0228 05:03:50.748997 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8r6gv" podStartSLOduration=2.292901536 podStartE2EDuration="2.748976409s" podCreationTimestamp="2026-02-28 05:03:48 +0000 UTC" firstStartedPulling="2026-02-28 05:03:49.705380493 +0000 UTC m=+1818.375506413" lastFinishedPulling="2026-02-28 05:03:50.161455336 +0000 UTC m=+1818.831581286" observedRunningTime="2026-02-28 05:03:50.743355656 +0000 UTC m=+1819.413481566" watchObservedRunningTime="2026-02-28 05:03:50.748976409 +0000 UTC m=+1819.419102329" Feb 28 05:03:53 crc kubenswrapper[5014]: I0228 05:03:53.173084 5014 scope.go:117] "RemoveContainer" containerID="831c080a3f614d28f54435ea4566bbc7b3d9ce5cf8da86a40f56d8dfeed0dff0" Feb 28 05:03:53 crc kubenswrapper[5014]: E0228 05:03:53.173857 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:03:54 crc kubenswrapper[5014]: I0228 05:03:54.778960 5014 generic.go:334] "Generic (PLEG): container finished" podID="5551729e-bd25-4c6c-b3d6-24a339aeab5c" containerID="aee81b8a15c88fcb0c776a66479f954440c0bd1f572b0f5af2dec01080ebecfd" exitCode=0 Feb 28 05:03:54 crc kubenswrapper[5014]: I0228 05:03:54.779018 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8r6gv" event={"ID":"5551729e-bd25-4c6c-b3d6-24a339aeab5c","Type":"ContainerDied","Data":"aee81b8a15c88fcb0c776a66479f954440c0bd1f572b0f5af2dec01080ebecfd"} Feb 28 05:03:55 crc kubenswrapper[5014]: I0228 05:03:55.039938 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-cmvws"] Feb 28 05:03:55 crc kubenswrapper[5014]: I0228 05:03:55.049421 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-cmvws"] Feb 28 05:03:55 crc kubenswrapper[5014]: I0228 05:03:55.838265 5014 scope.go:117] "RemoveContainer" containerID="f0d4a17f0725f933521ea7f8a5dff7c52378d5a2c722bdedbef4ef3f0cb77c82" Feb 28 05:03:55 crc kubenswrapper[5014]: I0228 05:03:55.886240 5014 scope.go:117] "RemoveContainer" containerID="4360b205468bbbcdfa98a2ff7d2e8c075e824fbd4f9ba9ce04a1685742c487f2" Feb 28 05:03:55 crc kubenswrapper[5014]: I0228 05:03:55.930364 5014 scope.go:117] "RemoveContainer" containerID="f44b6d11cc8abe8e734c8da218218685ee455b7f16e07f089dc3532f634cc34f" Feb 28 05:03:55 crc kubenswrapper[5014]: I0228 05:03:55.982603 5014 scope.go:117] "RemoveContainer" containerID="a3ba1f9d6c2aecf288a9e66a4321e73241e0b8862cd9f8511b263d2d494bea14" Feb 28 05:03:56 crc 
kubenswrapper[5014]: I0228 05:03:56.034116 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-7sqlf"] Feb 28 05:03:56 crc kubenswrapper[5014]: I0228 05:03:56.046178 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-7sqlf"] Feb 28 05:03:56 crc kubenswrapper[5014]: I0228 05:03:56.049148 5014 scope.go:117] "RemoveContainer" containerID="ceead62a11cec3d18ea3e806ba189b0f76d1ffeb85fbd2edeeb5f9ac23c786e5" Feb 28 05:03:56 crc kubenswrapper[5014]: I0228 05:03:56.106668 5014 scope.go:117] "RemoveContainer" containerID="ef52befa051782375bee581993518c9f6cc692c3909c8137b005222ad2a69211" Feb 28 05:03:56 crc kubenswrapper[5014]: I0228 05:03:56.142604 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8r6gv" Feb 28 05:03:56 crc kubenswrapper[5014]: I0228 05:03:56.150242 5014 scope.go:117] "RemoveContainer" containerID="6db1111e5c9de99a1229fca0f4833c3a55d96903b83992a2002c68471f6854ba" Feb 28 05:03:56 crc kubenswrapper[5014]: I0228 05:03:56.171693 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5551729e-bd25-4c6c-b3d6-24a339aeab5c-ssh-key-openstack-edpm-ipam\") pod \"5551729e-bd25-4c6c-b3d6-24a339aeab5c\" (UID: \"5551729e-bd25-4c6c-b3d6-24a339aeab5c\") " Feb 28 05:03:56 crc kubenswrapper[5014]: I0228 05:03:56.171875 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5551729e-bd25-4c6c-b3d6-24a339aeab5c-inventory\") pod \"5551729e-bd25-4c6c-b3d6-24a339aeab5c\" (UID: \"5551729e-bd25-4c6c-b3d6-24a339aeab5c\") " Feb 28 05:03:56 crc kubenswrapper[5014]: I0228 05:03:56.171936 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f59w2\" (UniqueName: 
\"kubernetes.io/projected/5551729e-bd25-4c6c-b3d6-24a339aeab5c-kube-api-access-f59w2\") pod \"5551729e-bd25-4c6c-b3d6-24a339aeab5c\" (UID: \"5551729e-bd25-4c6c-b3d6-24a339aeab5c\") " Feb 28 05:03:56 crc kubenswrapper[5014]: I0228 05:03:56.177742 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5551729e-bd25-4c6c-b3d6-24a339aeab5c-kube-api-access-f59w2" (OuterVolumeSpecName: "kube-api-access-f59w2") pod "5551729e-bd25-4c6c-b3d6-24a339aeab5c" (UID: "5551729e-bd25-4c6c-b3d6-24a339aeab5c"). InnerVolumeSpecName "kube-api-access-f59w2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:03:56 crc kubenswrapper[5014]: I0228 05:03:56.180082 5014 scope.go:117] "RemoveContainer" containerID="d0a82c59ea00be18e303205194b256bdc9ef9541536c4aa13de12fb8aadfcf04" Feb 28 05:03:56 crc kubenswrapper[5014]: I0228 05:03:56.182162 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c0a6572-64f2-488b-9533-c04957535d16" path="/var/lib/kubelet/pods/8c0a6572-64f2-488b-9533-c04957535d16/volumes" Feb 28 05:03:56 crc kubenswrapper[5014]: I0228 05:03:56.182986 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1736698-f0bd-493f-a03e-dc1957763f1a" path="/var/lib/kubelet/pods/f1736698-f0bd-493f-a03e-dc1957763f1a/volumes" Feb 28 05:03:56 crc kubenswrapper[5014]: I0228 05:03:56.203284 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5551729e-bd25-4c6c-b3d6-24a339aeab5c-inventory" (OuterVolumeSpecName: "inventory") pod "5551729e-bd25-4c6c-b3d6-24a339aeab5c" (UID: "5551729e-bd25-4c6c-b3d6-24a339aeab5c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:03:56 crc kubenswrapper[5014]: I0228 05:03:56.214011 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5551729e-bd25-4c6c-b3d6-24a339aeab5c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "5551729e-bd25-4c6c-b3d6-24a339aeab5c" (UID: "5551729e-bd25-4c6c-b3d6-24a339aeab5c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:03:56 crc kubenswrapper[5014]: I0228 05:03:56.277343 5014 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5551729e-bd25-4c6c-b3d6-24a339aeab5c-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 05:03:56 crc kubenswrapper[5014]: I0228 05:03:56.277453 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f59w2\" (UniqueName: \"kubernetes.io/projected/5551729e-bd25-4c6c-b3d6-24a339aeab5c-kube-api-access-f59w2\") on node \"crc\" DevicePath \"\"" Feb 28 05:03:56 crc kubenswrapper[5014]: I0228 05:03:56.277496 5014 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/5551729e-bd25-4c6c-b3d6-24a339aeab5c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 05:03:56 crc kubenswrapper[5014]: I0228 05:03:56.801705 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8r6gv" event={"ID":"5551729e-bd25-4c6c-b3d6-24a339aeab5c","Type":"ContainerDied","Data":"56698d35938679ac99bbbf229801952fda6f5994ccf266327d853803f3622d70"} Feb 28 05:03:56 crc kubenswrapper[5014]: I0228 05:03:56.802037 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56698d35938679ac99bbbf229801952fda6f5994ccf266327d853803f3622d70" Feb 28 05:03:56 crc kubenswrapper[5014]: I0228 05:03:56.801781 5014 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-8r6gv" Feb 28 05:03:56 crc kubenswrapper[5014]: I0228 05:03:56.874634 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-qvmfl"] Feb 28 05:03:56 crc kubenswrapper[5014]: E0228 05:03:56.875316 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5551729e-bd25-4c6c-b3d6-24a339aeab5c" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 28 05:03:56 crc kubenswrapper[5014]: I0228 05:03:56.875335 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="5551729e-bd25-4c6c-b3d6-24a339aeab5c" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 28 05:03:56 crc kubenswrapper[5014]: I0228 05:03:56.875593 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="5551729e-bd25-4c6c-b3d6-24a339aeab5c" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 28 05:03:56 crc kubenswrapper[5014]: I0228 05:03:56.876606 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qvmfl" Feb 28 05:03:56 crc kubenswrapper[5014]: I0228 05:03:56.879548 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 05:03:56 crc kubenswrapper[5014]: I0228 05:03:56.888389 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 05:03:56 crc kubenswrapper[5014]: I0228 05:03:56.890607 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ff06abc-551c-452e-8593-603fb882db21-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qvmfl\" (UID: \"2ff06abc-551c-452e-8593-603fb882db21\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qvmfl" Feb 28 05:03:56 crc kubenswrapper[5014]: I0228 05:03:56.890917 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzspj\" (UniqueName: \"kubernetes.io/projected/2ff06abc-551c-452e-8593-603fb882db21-kube-api-access-lzspj\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qvmfl\" (UID: \"2ff06abc-551c-452e-8593-603fb882db21\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qvmfl" Feb 28 05:03:56 crc kubenswrapper[5014]: I0228 05:03:56.891057 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2ff06abc-551c-452e-8593-603fb882db21-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qvmfl\" (UID: \"2ff06abc-551c-452e-8593-603fb882db21\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qvmfl" Feb 28 05:03:56 crc kubenswrapper[5014]: I0228 05:03:56.892119 5014 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"openstack-edpm-ipam-dockercfg-6dz6b" Feb 28 05:03:56 crc kubenswrapper[5014]: I0228 05:03:56.892119 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 05:03:56 crc kubenswrapper[5014]: I0228 05:03:56.896903 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-qvmfl"] Feb 28 05:03:56 crc kubenswrapper[5014]: I0228 05:03:56.991686 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzspj\" (UniqueName: \"kubernetes.io/projected/2ff06abc-551c-452e-8593-603fb882db21-kube-api-access-lzspj\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qvmfl\" (UID: \"2ff06abc-551c-452e-8593-603fb882db21\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qvmfl" Feb 28 05:03:56 crc kubenswrapper[5014]: I0228 05:03:56.991738 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2ff06abc-551c-452e-8593-603fb882db21-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qvmfl\" (UID: \"2ff06abc-551c-452e-8593-603fb882db21\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qvmfl" Feb 28 05:03:56 crc kubenswrapper[5014]: I0228 05:03:56.991788 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ff06abc-551c-452e-8593-603fb882db21-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qvmfl\" (UID: \"2ff06abc-551c-452e-8593-603fb882db21\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qvmfl" Feb 28 05:03:56 crc kubenswrapper[5014]: I0228 05:03:56.997872 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ff06abc-551c-452e-8593-603fb882db21-inventory\") 
pod \"install-os-edpm-deployment-openstack-edpm-ipam-qvmfl\" (UID: \"2ff06abc-551c-452e-8593-603fb882db21\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qvmfl" Feb 28 05:03:56 crc kubenswrapper[5014]: I0228 05:03:56.998073 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2ff06abc-551c-452e-8593-603fb882db21-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qvmfl\" (UID: \"2ff06abc-551c-452e-8593-603fb882db21\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qvmfl" Feb 28 05:03:57 crc kubenswrapper[5014]: I0228 05:03:57.011638 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzspj\" (UniqueName: \"kubernetes.io/projected/2ff06abc-551c-452e-8593-603fb882db21-kube-api-access-lzspj\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-qvmfl\" (UID: \"2ff06abc-551c-452e-8593-603fb882db21\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qvmfl" Feb 28 05:03:57 crc kubenswrapper[5014]: I0228 05:03:57.239257 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qvmfl" Feb 28 05:03:57 crc kubenswrapper[5014]: W0228 05:03:57.852077 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ff06abc_551c_452e_8593_603fb882db21.slice/crio-580a747aa376a030b8f5e1806b212ab17649548718a8696481514f8c5f766997 WatchSource:0}: Error finding container 580a747aa376a030b8f5e1806b212ab17649548718a8696481514f8c5f766997: Status 404 returned error can't find the container with id 580a747aa376a030b8f5e1806b212ab17649548718a8696481514f8c5f766997 Feb 28 05:03:57 crc kubenswrapper[5014]: I0228 05:03:57.853613 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-qvmfl"] Feb 28 05:03:58 crc kubenswrapper[5014]: I0228 05:03:58.035573 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-5tgzd"] Feb 28 05:03:58 crc kubenswrapper[5014]: I0228 05:03:58.051381 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-5tgzd"] Feb 28 05:03:58 crc kubenswrapper[5014]: I0228 05:03:58.183184 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5e88418-60bd-44ee-8272-245ee92460c6" path="/var/lib/kubelet/pods/c5e88418-60bd-44ee-8272-245ee92460c6/volumes" Feb 28 05:03:58 crc kubenswrapper[5014]: I0228 05:03:58.824661 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qvmfl" event={"ID":"2ff06abc-551c-452e-8593-603fb882db21","Type":"ContainerStarted","Data":"0790836db01b48af085e41b8bfb447d74f01ca5e778fafbdae2dd6313170716a"} Feb 28 05:03:58 crc kubenswrapper[5014]: I0228 05:03:58.825061 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qvmfl" 
event={"ID":"2ff06abc-551c-452e-8593-603fb882db21","Type":"ContainerStarted","Data":"580a747aa376a030b8f5e1806b212ab17649548718a8696481514f8c5f766997"} Feb 28 05:03:58 crc kubenswrapper[5014]: I0228 05:03:58.844170 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qvmfl" podStartSLOduration=2.4203993280000002 podStartE2EDuration="2.844148581s" podCreationTimestamp="2026-02-28 05:03:56 +0000 UTC" firstStartedPulling="2026-02-28 05:03:57.854896894 +0000 UTC m=+1826.525022814" lastFinishedPulling="2026-02-28 05:03:58.278646157 +0000 UTC m=+1826.948772067" observedRunningTime="2026-02-28 05:03:58.841351414 +0000 UTC m=+1827.511477324" watchObservedRunningTime="2026-02-28 05:03:58.844148581 +0000 UTC m=+1827.514274491" Feb 28 05:04:00 crc kubenswrapper[5014]: I0228 05:04:00.145462 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537584-z2t67"] Feb 28 05:04:00 crc kubenswrapper[5014]: I0228 05:04:00.148088 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537584-z2t67" Feb 28 05:04:00 crc kubenswrapper[5014]: I0228 05:04:00.151592 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 05:04:00 crc kubenswrapper[5014]: I0228 05:04:00.152275 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-r6pdk" Feb 28 05:04:00 crc kubenswrapper[5014]: I0228 05:04:00.155949 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 05:04:00 crc kubenswrapper[5014]: I0228 05:04:00.161626 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537584-z2t67"] Feb 28 05:04:00 crc kubenswrapper[5014]: I0228 05:04:00.255221 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rfgw\" (UniqueName: \"kubernetes.io/projected/8b0b5e49-8417-4763-8c0b-18a1d0a3a503-kube-api-access-9rfgw\") pod \"auto-csr-approver-29537584-z2t67\" (UID: \"8b0b5e49-8417-4763-8c0b-18a1d0a3a503\") " pod="openshift-infra/auto-csr-approver-29537584-z2t67" Feb 28 05:04:00 crc kubenswrapper[5014]: I0228 05:04:00.358108 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rfgw\" (UniqueName: \"kubernetes.io/projected/8b0b5e49-8417-4763-8c0b-18a1d0a3a503-kube-api-access-9rfgw\") pod \"auto-csr-approver-29537584-z2t67\" (UID: \"8b0b5e49-8417-4763-8c0b-18a1d0a3a503\") " pod="openshift-infra/auto-csr-approver-29537584-z2t67" Feb 28 05:04:00 crc kubenswrapper[5014]: I0228 05:04:00.390307 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rfgw\" (UniqueName: \"kubernetes.io/projected/8b0b5e49-8417-4763-8c0b-18a1d0a3a503-kube-api-access-9rfgw\") pod \"auto-csr-approver-29537584-z2t67\" (UID: \"8b0b5e49-8417-4763-8c0b-18a1d0a3a503\") " 
pod="openshift-infra/auto-csr-approver-29537584-z2t67" Feb 28 05:04:00 crc kubenswrapper[5014]: I0228 05:04:00.472285 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537584-z2t67" Feb 28 05:04:00 crc kubenswrapper[5014]: W0228 05:04:00.937477 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b0b5e49_8417_4763_8c0b_18a1d0a3a503.slice/crio-0865e3519fdeacaf4e99b7786501efb028165488a22e763760f71a92230480ba WatchSource:0}: Error finding container 0865e3519fdeacaf4e99b7786501efb028165488a22e763760f71a92230480ba: Status 404 returned error can't find the container with id 0865e3519fdeacaf4e99b7786501efb028165488a22e763760f71a92230480ba Feb 28 05:04:00 crc kubenswrapper[5014]: I0228 05:04:00.940141 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537584-z2t67"] Feb 28 05:04:01 crc kubenswrapper[5014]: I0228 05:04:01.856139 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537584-z2t67" event={"ID":"8b0b5e49-8417-4763-8c0b-18a1d0a3a503","Type":"ContainerStarted","Data":"0865e3519fdeacaf4e99b7786501efb028165488a22e763760f71a92230480ba"} Feb 28 05:04:02 crc kubenswrapper[5014]: I0228 05:04:02.033956 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-wxq9x"] Feb 28 05:04:02 crc kubenswrapper[5014]: I0228 05:04:02.046036 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-wxq9x"] Feb 28 05:04:02 crc kubenswrapper[5014]: I0228 05:04:02.191293 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57f91015-35f5-486c-a88c-0a90f76724e5" path="/var/lib/kubelet/pods/57f91015-35f5-486c-a88c-0a90f76724e5/volumes" Feb 28 05:04:02 crc kubenswrapper[5014]: I0228 05:04:02.864926 5014 generic.go:334] "Generic (PLEG): container finished" 
podID="8b0b5e49-8417-4763-8c0b-18a1d0a3a503" containerID="b46cf6daf39ad8d094662e7786edbc88ce70a78a6647328f6a989e68163b54d4" exitCode=0 Feb 28 05:04:02 crc kubenswrapper[5014]: I0228 05:04:02.865000 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537584-z2t67" event={"ID":"8b0b5e49-8417-4763-8c0b-18a1d0a3a503","Type":"ContainerDied","Data":"b46cf6daf39ad8d094662e7786edbc88ce70a78a6647328f6a989e68163b54d4"} Feb 28 05:04:04 crc kubenswrapper[5014]: I0228 05:04:04.268767 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537584-z2t67" Feb 28 05:04:04 crc kubenswrapper[5014]: I0228 05:04:04.446352 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rfgw\" (UniqueName: \"kubernetes.io/projected/8b0b5e49-8417-4763-8c0b-18a1d0a3a503-kube-api-access-9rfgw\") pod \"8b0b5e49-8417-4763-8c0b-18a1d0a3a503\" (UID: \"8b0b5e49-8417-4763-8c0b-18a1d0a3a503\") " Feb 28 05:04:04 crc kubenswrapper[5014]: I0228 05:04:04.451898 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b0b5e49-8417-4763-8c0b-18a1d0a3a503-kube-api-access-9rfgw" (OuterVolumeSpecName: "kube-api-access-9rfgw") pod "8b0b5e49-8417-4763-8c0b-18a1d0a3a503" (UID: "8b0b5e49-8417-4763-8c0b-18a1d0a3a503"). InnerVolumeSpecName "kube-api-access-9rfgw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:04:04 crc kubenswrapper[5014]: I0228 05:04:04.548469 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rfgw\" (UniqueName: \"kubernetes.io/projected/8b0b5e49-8417-4763-8c0b-18a1d0a3a503-kube-api-access-9rfgw\") on node \"crc\" DevicePath \"\"" Feb 28 05:04:04 crc kubenswrapper[5014]: I0228 05:04:04.887856 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537584-z2t67" event={"ID":"8b0b5e49-8417-4763-8c0b-18a1d0a3a503","Type":"ContainerDied","Data":"0865e3519fdeacaf4e99b7786501efb028165488a22e763760f71a92230480ba"} Feb 28 05:04:04 crc kubenswrapper[5014]: I0228 05:04:04.887896 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537584-z2t67" Feb 28 05:04:04 crc kubenswrapper[5014]: I0228 05:04:04.887950 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0865e3519fdeacaf4e99b7786501efb028165488a22e763760f71a92230480ba" Feb 28 05:04:05 crc kubenswrapper[5014]: I0228 05:04:05.335346 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537578-fvxfg"] Feb 28 05:04:05 crc kubenswrapper[5014]: I0228 05:04:05.345197 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537578-fvxfg"] Feb 28 05:04:06 crc kubenswrapper[5014]: I0228 05:04:06.184654 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ab3025f-a356-4183-9663-8a3c8290c265" path="/var/lib/kubelet/pods/0ab3025f-a356-4183-9663-8a3c8290c265/volumes" Feb 28 05:04:08 crc kubenswrapper[5014]: I0228 05:04:08.178319 5014 scope.go:117] "RemoveContainer" containerID="831c080a3f614d28f54435ea4566bbc7b3d9ce5cf8da86a40f56d8dfeed0dff0" Feb 28 05:04:08 crc kubenswrapper[5014]: E0228 05:04:08.178907 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:04:10 crc kubenswrapper[5014]: I0228 05:04:10.038263 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-c9b9j"] Feb 28 05:04:10 crc kubenswrapper[5014]: I0228 05:04:10.050995 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-c9b9j"] Feb 28 05:04:10 crc kubenswrapper[5014]: I0228 05:04:10.192853 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1688b2e2-1aaf-49e0-8414-0f12bb079aba" path="/var/lib/kubelet/pods/1688b2e2-1aaf-49e0-8414-0f12bb079aba/volumes" Feb 28 05:04:20 crc kubenswrapper[5014]: I0228 05:04:20.172355 5014 scope.go:117] "RemoveContainer" containerID="831c080a3f614d28f54435ea4566bbc7b3d9ce5cf8da86a40f56d8dfeed0dff0" Feb 28 05:04:20 crc kubenswrapper[5014]: E0228 05:04:20.176441 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:04:31 crc kubenswrapper[5014]: I0228 05:04:31.172582 5014 scope.go:117] "RemoveContainer" containerID="831c080a3f614d28f54435ea4566bbc7b3d9ce5cf8da86a40f56d8dfeed0dff0" Feb 28 05:04:31 crc kubenswrapper[5014]: E0228 05:04:31.173631 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:04:32 crc kubenswrapper[5014]: I0228 05:04:32.586119 5014 generic.go:334] "Generic (PLEG): container finished" podID="2ff06abc-551c-452e-8593-603fb882db21" containerID="0790836db01b48af085e41b8bfb447d74f01ca5e778fafbdae2dd6313170716a" exitCode=0 Feb 28 05:04:32 crc kubenswrapper[5014]: I0228 05:04:32.586192 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qvmfl" event={"ID":"2ff06abc-551c-452e-8593-603fb882db21","Type":"ContainerDied","Data":"0790836db01b48af085e41b8bfb447d74f01ca5e778fafbdae2dd6313170716a"} Feb 28 05:04:34 crc kubenswrapper[5014]: I0228 05:04:34.003217 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qvmfl" Feb 28 05:04:34 crc kubenswrapper[5014]: I0228 05:04:34.157973 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzspj\" (UniqueName: \"kubernetes.io/projected/2ff06abc-551c-452e-8593-603fb882db21-kube-api-access-lzspj\") pod \"2ff06abc-551c-452e-8593-603fb882db21\" (UID: \"2ff06abc-551c-452e-8593-603fb882db21\") " Feb 28 05:04:34 crc kubenswrapper[5014]: I0228 05:04:34.158077 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ff06abc-551c-452e-8593-603fb882db21-inventory\") pod \"2ff06abc-551c-452e-8593-603fb882db21\" (UID: \"2ff06abc-551c-452e-8593-603fb882db21\") " Feb 28 05:04:34 crc kubenswrapper[5014]: I0228 05:04:34.158233 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/2ff06abc-551c-452e-8593-603fb882db21-ssh-key-openstack-edpm-ipam\") pod \"2ff06abc-551c-452e-8593-603fb882db21\" (UID: \"2ff06abc-551c-452e-8593-603fb882db21\") " Feb 28 05:04:34 crc kubenswrapper[5014]: I0228 05:04:34.165010 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ff06abc-551c-452e-8593-603fb882db21-kube-api-access-lzspj" (OuterVolumeSpecName: "kube-api-access-lzspj") pod "2ff06abc-551c-452e-8593-603fb882db21" (UID: "2ff06abc-551c-452e-8593-603fb882db21"). InnerVolumeSpecName "kube-api-access-lzspj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:04:34 crc kubenswrapper[5014]: I0228 05:04:34.189747 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ff06abc-551c-452e-8593-603fb882db21-inventory" (OuterVolumeSpecName: "inventory") pod "2ff06abc-551c-452e-8593-603fb882db21" (UID: "2ff06abc-551c-452e-8593-603fb882db21"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:04:34 crc kubenswrapper[5014]: I0228 05:04:34.200531 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ff06abc-551c-452e-8593-603fb882db21-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2ff06abc-551c-452e-8593-603fb882db21" (UID: "2ff06abc-551c-452e-8593-603fb882db21"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:04:34 crc kubenswrapper[5014]: I0228 05:04:34.261043 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzspj\" (UniqueName: \"kubernetes.io/projected/2ff06abc-551c-452e-8593-603fb882db21-kube-api-access-lzspj\") on node \"crc\" DevicePath \"\"" Feb 28 05:04:34 crc kubenswrapper[5014]: I0228 05:04:34.261991 5014 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2ff06abc-551c-452e-8593-603fb882db21-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 05:04:34 crc kubenswrapper[5014]: I0228 05:04:34.262024 5014 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2ff06abc-551c-452e-8593-603fb882db21-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 05:04:34 crc kubenswrapper[5014]: I0228 05:04:34.603969 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qvmfl" event={"ID":"2ff06abc-551c-452e-8593-603fb882db21","Type":"ContainerDied","Data":"580a747aa376a030b8f5e1806b212ab17649548718a8696481514f8c5f766997"} Feb 28 05:04:34 crc kubenswrapper[5014]: I0228 05:04:34.604011 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="580a747aa376a030b8f5e1806b212ab17649548718a8696481514f8c5f766997" Feb 28 05:04:34 crc kubenswrapper[5014]: I0228 05:04:34.604043 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-qvmfl" Feb 28 05:04:34 crc kubenswrapper[5014]: I0228 05:04:34.732911 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7cc87"] Feb 28 05:04:34 crc kubenswrapper[5014]: E0228 05:04:34.740064 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b0b5e49-8417-4763-8c0b-18a1d0a3a503" containerName="oc" Feb 28 05:04:34 crc kubenswrapper[5014]: I0228 05:04:34.740099 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b0b5e49-8417-4763-8c0b-18a1d0a3a503" containerName="oc" Feb 28 05:04:34 crc kubenswrapper[5014]: E0228 05:04:34.740121 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ff06abc-551c-452e-8593-603fb882db21" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 28 05:04:34 crc kubenswrapper[5014]: I0228 05:04:34.740128 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ff06abc-551c-452e-8593-603fb882db21" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 28 05:04:34 crc kubenswrapper[5014]: I0228 05:04:34.740287 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b0b5e49-8417-4763-8c0b-18a1d0a3a503" containerName="oc" Feb 28 05:04:34 crc kubenswrapper[5014]: I0228 05:04:34.740312 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ff06abc-551c-452e-8593-603fb882db21" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Feb 28 05:04:34 crc kubenswrapper[5014]: I0228 05:04:34.740979 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7cc87" Feb 28 05:04:34 crc kubenswrapper[5014]: I0228 05:04:34.745103 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 05:04:34 crc kubenswrapper[5014]: I0228 05:04:34.745157 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 05:04:34 crc kubenswrapper[5014]: I0228 05:04:34.745900 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6dz6b" Feb 28 05:04:34 crc kubenswrapper[5014]: I0228 05:04:34.749098 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 05:04:34 crc kubenswrapper[5014]: I0228 05:04:34.750051 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7cc87"] Feb 28 05:04:34 crc kubenswrapper[5014]: I0228 05:04:34.872947 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4vh9\" (UniqueName: \"kubernetes.io/projected/2cf2a283-e04c-4b99-978c-8e8261227a09-kube-api-access-r4vh9\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7cc87\" (UID: \"2cf2a283-e04c-4b99-978c-8e8261227a09\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7cc87" Feb 28 05:04:34 crc kubenswrapper[5014]: I0228 05:04:34.873004 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cf2a283-e04c-4b99-978c-8e8261227a09-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7cc87\" (UID: \"2cf2a283-e04c-4b99-978c-8e8261227a09\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7cc87" Feb 28 05:04:34 crc 
kubenswrapper[5014]: I0228 05:04:34.873084 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cf2a283-e04c-4b99-978c-8e8261227a09-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7cc87\" (UID: \"2cf2a283-e04c-4b99-978c-8e8261227a09\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7cc87" Feb 28 05:04:34 crc kubenswrapper[5014]: I0228 05:04:34.974958 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4vh9\" (UniqueName: \"kubernetes.io/projected/2cf2a283-e04c-4b99-978c-8e8261227a09-kube-api-access-r4vh9\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7cc87\" (UID: \"2cf2a283-e04c-4b99-978c-8e8261227a09\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7cc87" Feb 28 05:04:34 crc kubenswrapper[5014]: I0228 05:04:34.975044 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cf2a283-e04c-4b99-978c-8e8261227a09-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7cc87\" (UID: \"2cf2a283-e04c-4b99-978c-8e8261227a09\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7cc87" Feb 28 05:04:34 crc kubenswrapper[5014]: I0228 05:04:34.975145 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cf2a283-e04c-4b99-978c-8e8261227a09-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7cc87\" (UID: \"2cf2a283-e04c-4b99-978c-8e8261227a09\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7cc87" Feb 28 05:04:34 crc kubenswrapper[5014]: I0228 05:04:34.980665 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/2cf2a283-e04c-4b99-978c-8e8261227a09-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7cc87\" (UID: \"2cf2a283-e04c-4b99-978c-8e8261227a09\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7cc87" Feb 28 05:04:34 crc kubenswrapper[5014]: I0228 05:04:34.982424 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cf2a283-e04c-4b99-978c-8e8261227a09-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7cc87\" (UID: \"2cf2a283-e04c-4b99-978c-8e8261227a09\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7cc87" Feb 28 05:04:34 crc kubenswrapper[5014]: I0228 05:04:34.991762 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4vh9\" (UniqueName: \"kubernetes.io/projected/2cf2a283-e04c-4b99-978c-8e8261227a09-kube-api-access-r4vh9\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-7cc87\" (UID: \"2cf2a283-e04c-4b99-978c-8e8261227a09\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7cc87" Feb 28 05:04:35 crc kubenswrapper[5014]: I0228 05:04:35.066348 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7cc87" Feb 28 05:04:35 crc kubenswrapper[5014]: I0228 05:04:35.646693 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7cc87"] Feb 28 05:04:36 crc kubenswrapper[5014]: I0228 05:04:36.633389 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7cc87" event={"ID":"2cf2a283-e04c-4b99-978c-8e8261227a09","Type":"ContainerStarted","Data":"716f9f3c55f7bd354ef685b827676a0f84000e546fee88f9e8c4cb7310ee3071"} Feb 28 05:04:36 crc kubenswrapper[5014]: I0228 05:04:36.633935 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7cc87" event={"ID":"2cf2a283-e04c-4b99-978c-8e8261227a09","Type":"ContainerStarted","Data":"5aafe27017c433c2e72c55cd343b71fe2d7ad6da88d9ee3e1e9cb720dce2bbd6"} Feb 28 05:04:36 crc kubenswrapper[5014]: I0228 05:04:36.654401 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7cc87" podStartSLOduration=2.228535142 podStartE2EDuration="2.654379801s" podCreationTimestamp="2026-02-28 05:04:34 +0000 UTC" firstStartedPulling="2026-02-28 05:04:35.671192221 +0000 UTC m=+1864.341318151" lastFinishedPulling="2026-02-28 05:04:36.0970369 +0000 UTC m=+1864.767162810" observedRunningTime="2026-02-28 05:04:36.653088356 +0000 UTC m=+1865.323214266" watchObservedRunningTime="2026-02-28 05:04:36.654379801 +0000 UTC m=+1865.324505731" Feb 28 05:04:37 crc kubenswrapper[5014]: I0228 05:04:37.056962 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-dwb75"] Feb 28 05:04:37 crc kubenswrapper[5014]: I0228 05:04:37.065227 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-q8snd"] Feb 28 05:04:37 crc kubenswrapper[5014]: I0228 
05:04:37.071799 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-3ea3-account-create-update-h5bnp"] Feb 28 05:04:37 crc kubenswrapper[5014]: I0228 05:04:37.078898 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-jg75p"] Feb 28 05:04:37 crc kubenswrapper[5014]: I0228 05:04:37.086545 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-dwb75"] Feb 28 05:04:37 crc kubenswrapper[5014]: I0228 05:04:37.099063 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-3ea3-account-create-update-h5bnp"] Feb 28 05:04:37 crc kubenswrapper[5014]: I0228 05:04:37.114187 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-q8snd"] Feb 28 05:04:37 crc kubenswrapper[5014]: I0228 05:04:37.127466 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-jg75p"] Feb 28 05:04:38 crc kubenswrapper[5014]: I0228 05:04:38.037795 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-ceba-account-create-update-cjxpn"] Feb 28 05:04:38 crc kubenswrapper[5014]: I0228 05:04:38.049332 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-36df-account-create-update-wx84t"] Feb 28 05:04:38 crc kubenswrapper[5014]: I0228 05:04:38.059074 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-ceba-account-create-update-cjxpn"] Feb 28 05:04:38 crc kubenswrapper[5014]: I0228 05:04:38.069003 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-36df-account-create-update-wx84t"] Feb 28 05:04:38 crc kubenswrapper[5014]: I0228 05:04:38.188716 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00673aaf-5abc-4e06-91dd-8a1d71a5e726" path="/var/lib/kubelet/pods/00673aaf-5abc-4e06-91dd-8a1d71a5e726/volumes" Feb 28 05:04:38 crc kubenswrapper[5014]: I0228 05:04:38.189633 5014 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fd00cf3-841a-4ecc-b28c-8ba9d6d00894" path="/var/lib/kubelet/pods/0fd00cf3-841a-4ecc-b28c-8ba9d6d00894/volumes" Feb 28 05:04:38 crc kubenswrapper[5014]: I0228 05:04:38.190440 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d9ce69c-9aeb-4120-9abb-d052b56ff801" path="/var/lib/kubelet/pods/5d9ce69c-9aeb-4120-9abb-d052b56ff801/volumes" Feb 28 05:04:38 crc kubenswrapper[5014]: I0228 05:04:38.191204 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bf06a59-bec2-4829-bf19-65ed9856d251" path="/var/lib/kubelet/pods/7bf06a59-bec2-4829-bf19-65ed9856d251/volumes" Feb 28 05:04:38 crc kubenswrapper[5014]: I0228 05:04:38.192577 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1d0cd80-c46f-4f36-904f-ce3128cc997f" path="/var/lib/kubelet/pods/c1d0cd80-c46f-4f36-904f-ce3128cc997f/volumes" Feb 28 05:04:38 crc kubenswrapper[5014]: I0228 05:04:38.193368 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5ba47cb-6efc-46ac-97df-b895cac925a3" path="/var/lib/kubelet/pods/d5ba47cb-6efc-46ac-97df-b895cac925a3/volumes" Feb 28 05:04:46 crc kubenswrapper[5014]: I0228 05:04:46.173467 5014 scope.go:117] "RemoveContainer" containerID="831c080a3f614d28f54435ea4566bbc7b3d9ce5cf8da86a40f56d8dfeed0dff0" Feb 28 05:04:46 crc kubenswrapper[5014]: E0228 05:04:46.174498 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:04:56 crc kubenswrapper[5014]: I0228 05:04:56.343301 5014 scope.go:117] "RemoveContainer" 
containerID="888a50ebc86cbb9c2fa123861d89d1e67c733c3751e4c8a0e65f5b6ff951cd7e" Feb 28 05:04:56 crc kubenswrapper[5014]: I0228 05:04:56.402785 5014 scope.go:117] "RemoveContainer" containerID="a76224304bda64d72ab7c220934912e2a34af3b6b9aeda4b555afbea41665c59" Feb 28 05:04:56 crc kubenswrapper[5014]: I0228 05:04:56.434604 5014 scope.go:117] "RemoveContainer" containerID="e46198fde7de213649a0d2fa670fb8f6b899c1c190ad6f15fa59734b0d4103c0" Feb 28 05:04:56 crc kubenswrapper[5014]: I0228 05:04:56.539998 5014 scope.go:117] "RemoveContainer" containerID="8016fc329a3bc5f93c6c5cbd601e686f8dc401cc77617429b7e4fd657f12ffc9" Feb 28 05:04:56 crc kubenswrapper[5014]: I0228 05:04:56.637979 5014 scope.go:117] "RemoveContainer" containerID="820b2372de6b6cbb263f873004c058b9242e9297ac2dd9d2e297a9a9ebd46155" Feb 28 05:04:56 crc kubenswrapper[5014]: I0228 05:04:56.692950 5014 scope.go:117] "RemoveContainer" containerID="9aaa24f7f9b70636e0bbcf691ed857f8069615926ecea96387c1c3af532343a1" Feb 28 05:04:56 crc kubenswrapper[5014]: I0228 05:04:56.721482 5014 scope.go:117] "RemoveContainer" containerID="3df83a2367c173a24ad720e7edd8074d1b7150abc920d9d6a5ead167db2f5ba1" Feb 28 05:04:56 crc kubenswrapper[5014]: I0228 05:04:56.737425 5014 scope.go:117] "RemoveContainer" containerID="0ce85616c56a19ae32cdd705e5c4072145e7fad184d9161924b12988e54e9122" Feb 28 05:04:56 crc kubenswrapper[5014]: I0228 05:04:56.777443 5014 scope.go:117] "RemoveContainer" containerID="af276ef5b9532b8cc1167c9b701c2b045d70ec21a6d0476994c1cb54664be2e8" Feb 28 05:04:56 crc kubenswrapper[5014]: I0228 05:04:56.805179 5014 scope.go:117] "RemoveContainer" containerID="7943bd947cd43ddf77c62e9460ccfabe22c48e60eb5f82fe013071110b88514c" Feb 28 05:04:56 crc kubenswrapper[5014]: I0228 05:04:56.843125 5014 scope.go:117] "RemoveContainer" containerID="5bd27d3174a96a3ec9c5fdc4c8c0b5229913fe405521d965c5cdf1addbfbcf56" Feb 28 05:04:56 crc kubenswrapper[5014]: I0228 05:04:56.862971 5014 scope.go:117] "RemoveContainer" 
containerID="2e8c9e659725b7c3cfeb4a686cc1ebfeb6a49d0f4102098b4925a7e7d1aa3aaa" Feb 28 05:05:01 crc kubenswrapper[5014]: I0228 05:05:01.172980 5014 scope.go:117] "RemoveContainer" containerID="831c080a3f614d28f54435ea4566bbc7b3d9ce5cf8da86a40f56d8dfeed0dff0" Feb 28 05:05:01 crc kubenswrapper[5014]: E0228 05:05:01.174487 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:05:10 crc kubenswrapper[5014]: I0228 05:05:10.054316 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-txmbr"] Feb 28 05:05:10 crc kubenswrapper[5014]: I0228 05:05:10.078608 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-txmbr"] Feb 28 05:05:10 crc kubenswrapper[5014]: I0228 05:05:10.189724 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69e68ab2-dae2-4ebe-9820-c945a9897363" path="/var/lib/kubelet/pods/69e68ab2-dae2-4ebe-9820-c945a9897363/volumes" Feb 28 05:05:16 crc kubenswrapper[5014]: I0228 05:05:16.171876 5014 scope.go:117] "RemoveContainer" containerID="831c080a3f614d28f54435ea4566bbc7b3d9ce5cf8da86a40f56d8dfeed0dff0" Feb 28 05:05:16 crc kubenswrapper[5014]: E0228 05:05:16.172500 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" 
podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:05:23 crc kubenswrapper[5014]: I0228 05:05:23.492485 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-blfkj"] Feb 28 05:05:23 crc kubenswrapper[5014]: I0228 05:05:23.497250 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-blfkj" Feb 28 05:05:23 crc kubenswrapper[5014]: I0228 05:05:23.510742 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-blfkj"] Feb 28 05:05:23 crc kubenswrapper[5014]: I0228 05:05:23.569358 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4edfdc2b-038c-49c4-ab74-2b2f402457c9-utilities\") pod \"redhat-operators-blfkj\" (UID: \"4edfdc2b-038c-49c4-ab74-2b2f402457c9\") " pod="openshift-marketplace/redhat-operators-blfkj" Feb 28 05:05:23 crc kubenswrapper[5014]: I0228 05:05:23.569493 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh8tk\" (UniqueName: \"kubernetes.io/projected/4edfdc2b-038c-49c4-ab74-2b2f402457c9-kube-api-access-zh8tk\") pod \"redhat-operators-blfkj\" (UID: \"4edfdc2b-038c-49c4-ab74-2b2f402457c9\") " pod="openshift-marketplace/redhat-operators-blfkj" Feb 28 05:05:23 crc kubenswrapper[5014]: I0228 05:05:23.569568 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4edfdc2b-038c-49c4-ab74-2b2f402457c9-catalog-content\") pod \"redhat-operators-blfkj\" (UID: \"4edfdc2b-038c-49c4-ab74-2b2f402457c9\") " pod="openshift-marketplace/redhat-operators-blfkj" Feb 28 05:05:23 crc kubenswrapper[5014]: I0228 05:05:23.671464 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh8tk\" (UniqueName: 
\"kubernetes.io/projected/4edfdc2b-038c-49c4-ab74-2b2f402457c9-kube-api-access-zh8tk\") pod \"redhat-operators-blfkj\" (UID: \"4edfdc2b-038c-49c4-ab74-2b2f402457c9\") " pod="openshift-marketplace/redhat-operators-blfkj" Feb 28 05:05:23 crc kubenswrapper[5014]: I0228 05:05:23.671637 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4edfdc2b-038c-49c4-ab74-2b2f402457c9-catalog-content\") pod \"redhat-operators-blfkj\" (UID: \"4edfdc2b-038c-49c4-ab74-2b2f402457c9\") " pod="openshift-marketplace/redhat-operators-blfkj" Feb 28 05:05:23 crc kubenswrapper[5014]: I0228 05:05:23.671737 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4edfdc2b-038c-49c4-ab74-2b2f402457c9-utilities\") pod \"redhat-operators-blfkj\" (UID: \"4edfdc2b-038c-49c4-ab74-2b2f402457c9\") " pod="openshift-marketplace/redhat-operators-blfkj" Feb 28 05:05:23 crc kubenswrapper[5014]: I0228 05:05:23.672396 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4edfdc2b-038c-49c4-ab74-2b2f402457c9-utilities\") pod \"redhat-operators-blfkj\" (UID: \"4edfdc2b-038c-49c4-ab74-2b2f402457c9\") " pod="openshift-marketplace/redhat-operators-blfkj" Feb 28 05:05:23 crc kubenswrapper[5014]: I0228 05:05:23.672506 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4edfdc2b-038c-49c4-ab74-2b2f402457c9-catalog-content\") pod \"redhat-operators-blfkj\" (UID: \"4edfdc2b-038c-49c4-ab74-2b2f402457c9\") " pod="openshift-marketplace/redhat-operators-blfkj" Feb 28 05:05:23 crc kubenswrapper[5014]: I0228 05:05:23.693587 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh8tk\" (UniqueName: 
\"kubernetes.io/projected/4edfdc2b-038c-49c4-ab74-2b2f402457c9-kube-api-access-zh8tk\") pod \"redhat-operators-blfkj\" (UID: \"4edfdc2b-038c-49c4-ab74-2b2f402457c9\") " pod="openshift-marketplace/redhat-operators-blfkj" Feb 28 05:05:23 crc kubenswrapper[5014]: I0228 05:05:23.833321 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-blfkj" Feb 28 05:05:24 crc kubenswrapper[5014]: I0228 05:05:24.107328 5014 generic.go:334] "Generic (PLEG): container finished" podID="2cf2a283-e04c-4b99-978c-8e8261227a09" containerID="716f9f3c55f7bd354ef685b827676a0f84000e546fee88f9e8c4cb7310ee3071" exitCode=0 Feb 28 05:05:24 crc kubenswrapper[5014]: I0228 05:05:24.107417 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7cc87" event={"ID":"2cf2a283-e04c-4b99-978c-8e8261227a09","Type":"ContainerDied","Data":"716f9f3c55f7bd354ef685b827676a0f84000e546fee88f9e8c4cb7310ee3071"} Feb 28 05:05:24 crc kubenswrapper[5014]: I0228 05:05:24.281889 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-blfkj"] Feb 28 05:05:25 crc kubenswrapper[5014]: I0228 05:05:25.123757 5014 generic.go:334] "Generic (PLEG): container finished" podID="4edfdc2b-038c-49c4-ab74-2b2f402457c9" containerID="ea1ae05609027626c73e8b8fd4bf9b701dae4594a5266e8ee8fd73291cff9084" exitCode=0 Feb 28 05:05:25 crc kubenswrapper[5014]: I0228 05:05:25.124058 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blfkj" event={"ID":"4edfdc2b-038c-49c4-ab74-2b2f402457c9","Type":"ContainerDied","Data":"ea1ae05609027626c73e8b8fd4bf9b701dae4594a5266e8ee8fd73291cff9084"} Feb 28 05:05:25 crc kubenswrapper[5014]: I0228 05:05:25.127300 5014 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 05:05:25 crc kubenswrapper[5014]: I0228 05:05:25.128061 5014 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blfkj" event={"ID":"4edfdc2b-038c-49c4-ab74-2b2f402457c9","Type":"ContainerStarted","Data":"cc719284cde4a477cf96385f021114bb9226808cfb8a9ef1f48ef8ce25f93ae2"} Feb 28 05:05:25 crc kubenswrapper[5014]: I0228 05:05:25.600764 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7cc87" Feb 28 05:05:25 crc kubenswrapper[5014]: I0228 05:05:25.712405 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cf2a283-e04c-4b99-978c-8e8261227a09-inventory\") pod \"2cf2a283-e04c-4b99-978c-8e8261227a09\" (UID: \"2cf2a283-e04c-4b99-978c-8e8261227a09\") " Feb 28 05:05:25 crc kubenswrapper[5014]: I0228 05:05:25.712552 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cf2a283-e04c-4b99-978c-8e8261227a09-ssh-key-openstack-edpm-ipam\") pod \"2cf2a283-e04c-4b99-978c-8e8261227a09\" (UID: \"2cf2a283-e04c-4b99-978c-8e8261227a09\") " Feb 28 05:05:25 crc kubenswrapper[5014]: I0228 05:05:25.712579 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4vh9\" (UniqueName: \"kubernetes.io/projected/2cf2a283-e04c-4b99-978c-8e8261227a09-kube-api-access-r4vh9\") pod \"2cf2a283-e04c-4b99-978c-8e8261227a09\" (UID: \"2cf2a283-e04c-4b99-978c-8e8261227a09\") " Feb 28 05:05:25 crc kubenswrapper[5014]: I0228 05:05:25.720551 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cf2a283-e04c-4b99-978c-8e8261227a09-kube-api-access-r4vh9" (OuterVolumeSpecName: "kube-api-access-r4vh9") pod "2cf2a283-e04c-4b99-978c-8e8261227a09" (UID: "2cf2a283-e04c-4b99-978c-8e8261227a09"). InnerVolumeSpecName "kube-api-access-r4vh9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:05:25 crc kubenswrapper[5014]: I0228 05:05:25.746012 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cf2a283-e04c-4b99-978c-8e8261227a09-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2cf2a283-e04c-4b99-978c-8e8261227a09" (UID: "2cf2a283-e04c-4b99-978c-8e8261227a09"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:05:25 crc kubenswrapper[5014]: I0228 05:05:25.770835 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cf2a283-e04c-4b99-978c-8e8261227a09-inventory" (OuterVolumeSpecName: "inventory") pod "2cf2a283-e04c-4b99-978c-8e8261227a09" (UID: "2cf2a283-e04c-4b99-978c-8e8261227a09"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:05:25 crc kubenswrapper[5014]: I0228 05:05:25.819278 5014 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2cf2a283-e04c-4b99-978c-8e8261227a09-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 05:05:25 crc kubenswrapper[5014]: I0228 05:05:25.819327 5014 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2cf2a283-e04c-4b99-978c-8e8261227a09-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 05:05:25 crc kubenswrapper[5014]: I0228 05:05:25.819342 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4vh9\" (UniqueName: \"kubernetes.io/projected/2cf2a283-e04c-4b99-978c-8e8261227a09-kube-api-access-r4vh9\") on node \"crc\" DevicePath \"\"" Feb 28 05:05:26 crc kubenswrapper[5014]: I0228 05:05:26.138599 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7cc87" Feb 28 05:05:26 crc kubenswrapper[5014]: I0228 05:05:26.138597 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-7cc87" event={"ID":"2cf2a283-e04c-4b99-978c-8e8261227a09","Type":"ContainerDied","Data":"5aafe27017c433c2e72c55cd343b71fe2d7ad6da88d9ee3e1e9cb720dce2bbd6"} Feb 28 05:05:26 crc kubenswrapper[5014]: I0228 05:05:26.138765 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5aafe27017c433c2e72c55cd343b71fe2d7ad6da88d9ee3e1e9cb720dce2bbd6" Feb 28 05:05:26 crc kubenswrapper[5014]: I0228 05:05:26.142172 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blfkj" event={"ID":"4edfdc2b-038c-49c4-ab74-2b2f402457c9","Type":"ContainerStarted","Data":"58f4e8fef84140917cdf5a650974c7f2d8e5c755048557c6a76cdbdee10cb49b"} Feb 28 05:05:26 crc kubenswrapper[5014]: I0228 05:05:26.249950 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-shsr9"] Feb 28 05:05:26 crc kubenswrapper[5014]: E0228 05:05:26.250299 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cf2a283-e04c-4b99-978c-8e8261227a09" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 28 05:05:26 crc kubenswrapper[5014]: I0228 05:05:26.250315 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cf2a283-e04c-4b99-978c-8e8261227a09" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 28 05:05:26 crc kubenswrapper[5014]: I0228 05:05:26.250503 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cf2a283-e04c-4b99-978c-8e8261227a09" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Feb 28 05:05:26 crc kubenswrapper[5014]: I0228 05:05:26.251168 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-shsr9" Feb 28 05:05:26 crc kubenswrapper[5014]: I0228 05:05:26.257415 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 05:05:26 crc kubenswrapper[5014]: I0228 05:05:26.257869 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6dz6b" Feb 28 05:05:26 crc kubenswrapper[5014]: I0228 05:05:26.258132 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 05:05:26 crc kubenswrapper[5014]: I0228 05:05:26.258370 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 05:05:26 crc kubenswrapper[5014]: I0228 05:05:26.264767 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-shsr9"] Feb 28 05:05:26 crc kubenswrapper[5014]: I0228 05:05:26.334339 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ld48\" (UniqueName: \"kubernetes.io/projected/fd843e16-57f4-412b-aeec-d22b9609502f-kube-api-access-2ld48\") pod \"ssh-known-hosts-edpm-deployment-shsr9\" (UID: \"fd843e16-57f4-412b-aeec-d22b9609502f\") " pod="openstack/ssh-known-hosts-edpm-deployment-shsr9" Feb 28 05:05:26 crc kubenswrapper[5014]: I0228 05:05:26.334468 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd843e16-57f4-412b-aeec-d22b9609502f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-shsr9\" (UID: \"fd843e16-57f4-412b-aeec-d22b9609502f\") " pod="openstack/ssh-known-hosts-edpm-deployment-shsr9" Feb 28 05:05:26 crc kubenswrapper[5014]: I0228 05:05:26.334526 5014 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/fd843e16-57f4-412b-aeec-d22b9609502f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-shsr9\" (UID: \"fd843e16-57f4-412b-aeec-d22b9609502f\") " pod="openstack/ssh-known-hosts-edpm-deployment-shsr9" Feb 28 05:05:26 crc kubenswrapper[5014]: I0228 05:05:26.436724 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/fd843e16-57f4-412b-aeec-d22b9609502f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-shsr9\" (UID: \"fd843e16-57f4-412b-aeec-d22b9609502f\") " pod="openstack/ssh-known-hosts-edpm-deployment-shsr9" Feb 28 05:05:26 crc kubenswrapper[5014]: I0228 05:05:26.437400 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ld48\" (UniqueName: \"kubernetes.io/projected/fd843e16-57f4-412b-aeec-d22b9609502f-kube-api-access-2ld48\") pod \"ssh-known-hosts-edpm-deployment-shsr9\" (UID: \"fd843e16-57f4-412b-aeec-d22b9609502f\") " pod="openstack/ssh-known-hosts-edpm-deployment-shsr9" Feb 28 05:05:26 crc kubenswrapper[5014]: I0228 05:05:26.437508 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd843e16-57f4-412b-aeec-d22b9609502f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-shsr9\" (UID: \"fd843e16-57f4-412b-aeec-d22b9609502f\") " pod="openstack/ssh-known-hosts-edpm-deployment-shsr9" Feb 28 05:05:26 crc kubenswrapper[5014]: I0228 05:05:26.445383 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd843e16-57f4-412b-aeec-d22b9609502f-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-shsr9\" (UID: \"fd843e16-57f4-412b-aeec-d22b9609502f\") " pod="openstack/ssh-known-hosts-edpm-deployment-shsr9" Feb 
28 05:05:26 crc kubenswrapper[5014]: I0228 05:05:26.448563 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/fd843e16-57f4-412b-aeec-d22b9609502f-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-shsr9\" (UID: \"fd843e16-57f4-412b-aeec-d22b9609502f\") " pod="openstack/ssh-known-hosts-edpm-deployment-shsr9" Feb 28 05:05:26 crc kubenswrapper[5014]: I0228 05:05:26.466818 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ld48\" (UniqueName: \"kubernetes.io/projected/fd843e16-57f4-412b-aeec-d22b9609502f-kube-api-access-2ld48\") pod \"ssh-known-hosts-edpm-deployment-shsr9\" (UID: \"fd843e16-57f4-412b-aeec-d22b9609502f\") " pod="openstack/ssh-known-hosts-edpm-deployment-shsr9" Feb 28 05:05:26 crc kubenswrapper[5014]: I0228 05:05:26.580082 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-shsr9" Feb 28 05:05:27 crc kubenswrapper[5014]: I0228 05:05:27.117174 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-shsr9"] Feb 28 05:05:27 crc kubenswrapper[5014]: I0228 05:05:27.153371 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-shsr9" event={"ID":"fd843e16-57f4-412b-aeec-d22b9609502f","Type":"ContainerStarted","Data":"243c7e65687ce3c64b77d65986531bca7dd07117009f3272024d6c97e4a314b5"} Feb 28 05:05:28 crc kubenswrapper[5014]: I0228 05:05:28.165067 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-shsr9" event={"ID":"fd843e16-57f4-412b-aeec-d22b9609502f","Type":"ContainerStarted","Data":"407d59cdb2dce64e1c50ff6d0bcc289ac76d17e06eb174197dc02fdc6de6372c"} Feb 28 05:05:28 crc kubenswrapper[5014]: I0228 05:05:28.199476 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ssh-known-hosts-edpm-deployment-shsr9" podStartSLOduration=1.796886494 podStartE2EDuration="2.19945009s" podCreationTimestamp="2026-02-28 05:05:26 +0000 UTC" firstStartedPulling="2026-02-28 05:05:27.116974175 +0000 UTC m=+1915.787100085" lastFinishedPulling="2026-02-28 05:05:27.519537761 +0000 UTC m=+1916.189663681" observedRunningTime="2026-02-28 05:05:28.184527644 +0000 UTC m=+1916.854653594" watchObservedRunningTime="2026-02-28 05:05:28.19945009 +0000 UTC m=+1916.869576040" Feb 28 05:05:29 crc kubenswrapper[5014]: I0228 05:05:29.177655 5014 generic.go:334] "Generic (PLEG): container finished" podID="4edfdc2b-038c-49c4-ab74-2b2f402457c9" containerID="58f4e8fef84140917cdf5a650974c7f2d8e5c755048557c6a76cdbdee10cb49b" exitCode=0 Feb 28 05:05:29 crc kubenswrapper[5014]: I0228 05:05:29.177718 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blfkj" event={"ID":"4edfdc2b-038c-49c4-ab74-2b2f402457c9","Type":"ContainerDied","Data":"58f4e8fef84140917cdf5a650974c7f2d8e5c755048557c6a76cdbdee10cb49b"} Feb 28 05:05:30 crc kubenswrapper[5014]: I0228 05:05:30.171499 5014 scope.go:117] "RemoveContainer" containerID="831c080a3f614d28f54435ea4566bbc7b3d9ce5cf8da86a40f56d8dfeed0dff0" Feb 28 05:05:30 crc kubenswrapper[5014]: E0228 05:05:30.172096 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:05:30 crc kubenswrapper[5014]: I0228 05:05:30.187743 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blfkj" 
event={"ID":"4edfdc2b-038c-49c4-ab74-2b2f402457c9","Type":"ContainerStarted","Data":"bc208e5f89d00a01bf98c4d7615fa0b0ea715def684d63cef2e6e7b40c9fa7dc"} Feb 28 05:05:30 crc kubenswrapper[5014]: I0228 05:05:30.220754 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-blfkj" podStartSLOduration=2.761139695 podStartE2EDuration="7.220732108s" podCreationTimestamp="2026-02-28 05:05:23 +0000 UTC" firstStartedPulling="2026-02-28 05:05:25.126733344 +0000 UTC m=+1913.796859294" lastFinishedPulling="2026-02-28 05:05:29.586325787 +0000 UTC m=+1918.256451707" observedRunningTime="2026-02-28 05:05:30.213366407 +0000 UTC m=+1918.883492357" watchObservedRunningTime="2026-02-28 05:05:30.220732108 +0000 UTC m=+1918.890858018" Feb 28 05:05:32 crc kubenswrapper[5014]: I0228 05:05:32.058225 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lp5x6"] Feb 28 05:05:32 crc kubenswrapper[5014]: I0228 05:05:32.072890 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-c82cz"] Feb 28 05:05:32 crc kubenswrapper[5014]: I0228 05:05:32.082433 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-c82cz"] Feb 28 05:05:32 crc kubenswrapper[5014]: I0228 05:05:32.094294 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lp5x6"] Feb 28 05:05:32 crc kubenswrapper[5014]: I0228 05:05:32.184558 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c2c7b5d-d778-4d96-a6fb-171203f594d8" path="/var/lib/kubelet/pods/3c2c7b5d-d778-4d96-a6fb-171203f594d8/volumes" Feb 28 05:05:32 crc kubenswrapper[5014]: I0228 05:05:32.185337 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e1c9f9-6a89-4ff4-8075-7d737bd42ec5" path="/var/lib/kubelet/pods/81e1c9f9-6a89-4ff4-8075-7d737bd42ec5/volumes" Feb 28 05:05:33 crc kubenswrapper[5014]: 
I0228 05:05:33.833855 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-blfkj" Feb 28 05:05:33 crc kubenswrapper[5014]: I0228 05:05:33.834516 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-blfkj" Feb 28 05:05:34 crc kubenswrapper[5014]: I0228 05:05:34.914119 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-blfkj" podUID="4edfdc2b-038c-49c4-ab74-2b2f402457c9" containerName="registry-server" probeResult="failure" output=< Feb 28 05:05:34 crc kubenswrapper[5014]: timeout: failed to connect service ":50051" within 1s Feb 28 05:05:34 crc kubenswrapper[5014]: > Feb 28 05:05:35 crc kubenswrapper[5014]: I0228 05:05:35.232970 5014 generic.go:334] "Generic (PLEG): container finished" podID="fd843e16-57f4-412b-aeec-d22b9609502f" containerID="407d59cdb2dce64e1c50ff6d0bcc289ac76d17e06eb174197dc02fdc6de6372c" exitCode=0 Feb 28 05:05:35 crc kubenswrapper[5014]: I0228 05:05:35.233020 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-shsr9" event={"ID":"fd843e16-57f4-412b-aeec-d22b9609502f","Type":"ContainerDied","Data":"407d59cdb2dce64e1c50ff6d0bcc289ac76d17e06eb174197dc02fdc6de6372c"} Feb 28 05:05:36 crc kubenswrapper[5014]: I0228 05:05:36.701148 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-shsr9" Feb 28 05:05:36 crc kubenswrapper[5014]: I0228 05:05:36.739138 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd843e16-57f4-412b-aeec-d22b9609502f-ssh-key-openstack-edpm-ipam\") pod \"fd843e16-57f4-412b-aeec-d22b9609502f\" (UID: \"fd843e16-57f4-412b-aeec-d22b9609502f\") " Feb 28 05:05:36 crc kubenswrapper[5014]: I0228 05:05:36.739273 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/fd843e16-57f4-412b-aeec-d22b9609502f-inventory-0\") pod \"fd843e16-57f4-412b-aeec-d22b9609502f\" (UID: \"fd843e16-57f4-412b-aeec-d22b9609502f\") " Feb 28 05:05:36 crc kubenswrapper[5014]: I0228 05:05:36.739341 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ld48\" (UniqueName: \"kubernetes.io/projected/fd843e16-57f4-412b-aeec-d22b9609502f-kube-api-access-2ld48\") pod \"fd843e16-57f4-412b-aeec-d22b9609502f\" (UID: \"fd843e16-57f4-412b-aeec-d22b9609502f\") " Feb 28 05:05:36 crc kubenswrapper[5014]: I0228 05:05:36.747150 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd843e16-57f4-412b-aeec-d22b9609502f-kube-api-access-2ld48" (OuterVolumeSpecName: "kube-api-access-2ld48") pod "fd843e16-57f4-412b-aeec-d22b9609502f" (UID: "fd843e16-57f4-412b-aeec-d22b9609502f"). InnerVolumeSpecName "kube-api-access-2ld48". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:05:36 crc kubenswrapper[5014]: I0228 05:05:36.775013 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd843e16-57f4-412b-aeec-d22b9609502f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fd843e16-57f4-412b-aeec-d22b9609502f" (UID: "fd843e16-57f4-412b-aeec-d22b9609502f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:05:36 crc kubenswrapper[5014]: I0228 05:05:36.782984 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd843e16-57f4-412b-aeec-d22b9609502f-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "fd843e16-57f4-412b-aeec-d22b9609502f" (UID: "fd843e16-57f4-412b-aeec-d22b9609502f"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:05:36 crc kubenswrapper[5014]: I0228 05:05:36.840859 5014 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd843e16-57f4-412b-aeec-d22b9609502f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 05:05:36 crc kubenswrapper[5014]: I0228 05:05:36.840889 5014 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/fd843e16-57f4-412b-aeec-d22b9609502f-inventory-0\") on node \"crc\" DevicePath \"\"" Feb 28 05:05:36 crc kubenswrapper[5014]: I0228 05:05:36.840899 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ld48\" (UniqueName: \"kubernetes.io/projected/fd843e16-57f4-412b-aeec-d22b9609502f-kube-api-access-2ld48\") on node \"crc\" DevicePath \"\"" Feb 28 05:05:37 crc kubenswrapper[5014]: I0228 05:05:37.257995 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-shsr9" 
event={"ID":"fd843e16-57f4-412b-aeec-d22b9609502f","Type":"ContainerDied","Data":"243c7e65687ce3c64b77d65986531bca7dd07117009f3272024d6c97e4a314b5"} Feb 28 05:05:37 crc kubenswrapper[5014]: I0228 05:05:37.258044 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="243c7e65687ce3c64b77d65986531bca7dd07117009f3272024d6c97e4a314b5" Feb 28 05:05:37 crc kubenswrapper[5014]: I0228 05:05:37.258021 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-shsr9" Feb 28 05:05:37 crc kubenswrapper[5014]: I0228 05:05:37.365376 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-mbxvz"] Feb 28 05:05:37 crc kubenswrapper[5014]: E0228 05:05:37.365986 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd843e16-57f4-412b-aeec-d22b9609502f" containerName="ssh-known-hosts-edpm-deployment" Feb 28 05:05:37 crc kubenswrapper[5014]: I0228 05:05:37.366007 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd843e16-57f4-412b-aeec-d22b9609502f" containerName="ssh-known-hosts-edpm-deployment" Feb 28 05:05:37 crc kubenswrapper[5014]: I0228 05:05:37.366454 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd843e16-57f4-412b-aeec-d22b9609502f" containerName="ssh-known-hosts-edpm-deployment" Feb 28 05:05:37 crc kubenswrapper[5014]: I0228 05:05:37.367446 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mbxvz" Feb 28 05:05:37 crc kubenswrapper[5014]: I0228 05:05:37.380564 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-mbxvz"] Feb 28 05:05:37 crc kubenswrapper[5014]: I0228 05:05:37.381058 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 05:05:37 crc kubenswrapper[5014]: I0228 05:05:37.381381 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 05:05:37 crc kubenswrapper[5014]: I0228 05:05:37.381838 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6dz6b" Feb 28 05:05:37 crc kubenswrapper[5014]: I0228 05:05:37.381993 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 05:05:37 crc kubenswrapper[5014]: I0228 05:05:37.551436 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d570627-429c-4a9c-a45a-55d652968c46-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mbxvz\" (UID: \"3d570627-429c-4a9c-a45a-55d652968c46\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mbxvz" Feb 28 05:05:37 crc kubenswrapper[5014]: I0228 05:05:37.551512 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szrdk\" (UniqueName: \"kubernetes.io/projected/3d570627-429c-4a9c-a45a-55d652968c46-kube-api-access-szrdk\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mbxvz\" (UID: \"3d570627-429c-4a9c-a45a-55d652968c46\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mbxvz" Feb 28 05:05:37 crc kubenswrapper[5014]: I0228 05:05:37.551576 5014 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d570627-429c-4a9c-a45a-55d652968c46-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mbxvz\" (UID: \"3d570627-429c-4a9c-a45a-55d652968c46\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mbxvz" Feb 28 05:05:37 crc kubenswrapper[5014]: I0228 05:05:37.653043 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d570627-429c-4a9c-a45a-55d652968c46-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mbxvz\" (UID: \"3d570627-429c-4a9c-a45a-55d652968c46\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mbxvz" Feb 28 05:05:37 crc kubenswrapper[5014]: I0228 05:05:37.653406 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szrdk\" (UniqueName: \"kubernetes.io/projected/3d570627-429c-4a9c-a45a-55d652968c46-kube-api-access-szrdk\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mbxvz\" (UID: \"3d570627-429c-4a9c-a45a-55d652968c46\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mbxvz" Feb 28 05:05:37 crc kubenswrapper[5014]: I0228 05:05:37.653487 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d570627-429c-4a9c-a45a-55d652968c46-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mbxvz\" (UID: \"3d570627-429c-4a9c-a45a-55d652968c46\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mbxvz" Feb 28 05:05:37 crc kubenswrapper[5014]: I0228 05:05:37.656916 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d570627-429c-4a9c-a45a-55d652968c46-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mbxvz\" (UID: 
\"3d570627-429c-4a9c-a45a-55d652968c46\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mbxvz" Feb 28 05:05:37 crc kubenswrapper[5014]: I0228 05:05:37.657197 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d570627-429c-4a9c-a45a-55d652968c46-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mbxvz\" (UID: \"3d570627-429c-4a9c-a45a-55d652968c46\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mbxvz" Feb 28 05:05:37 crc kubenswrapper[5014]: I0228 05:05:37.672981 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szrdk\" (UniqueName: \"kubernetes.io/projected/3d570627-429c-4a9c-a45a-55d652968c46-kube-api-access-szrdk\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-mbxvz\" (UID: \"3d570627-429c-4a9c-a45a-55d652968c46\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mbxvz" Feb 28 05:05:37 crc kubenswrapper[5014]: I0228 05:05:37.699514 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mbxvz" Feb 28 05:05:38 crc kubenswrapper[5014]: I0228 05:05:38.217728 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-mbxvz"] Feb 28 05:05:38 crc kubenswrapper[5014]: I0228 05:05:38.266853 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mbxvz" event={"ID":"3d570627-429c-4a9c-a45a-55d652968c46","Type":"ContainerStarted","Data":"a88dd9c98fb617f31ae105967c0607fe1efe8dc52e4f383e50060bc014d556e3"} Feb 28 05:05:39 crc kubenswrapper[5014]: I0228 05:05:39.275548 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mbxvz" event={"ID":"3d570627-429c-4a9c-a45a-55d652968c46","Type":"ContainerStarted","Data":"d0721e997cffaf80544adf455228004976f339e695e7278d865e493236c708fa"} Feb 28 05:05:39 crc kubenswrapper[5014]: I0228 05:05:39.297097 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mbxvz" podStartSLOduration=1.873014065 podStartE2EDuration="2.297073606s" podCreationTimestamp="2026-02-28 05:05:37 +0000 UTC" firstStartedPulling="2026-02-28 05:05:38.230352049 +0000 UTC m=+1926.900477969" lastFinishedPulling="2026-02-28 05:05:38.6544116 +0000 UTC m=+1927.324537510" observedRunningTime="2026-02-28 05:05:39.290054794 +0000 UTC m=+1927.960180704" watchObservedRunningTime="2026-02-28 05:05:39.297073606 +0000 UTC m=+1927.967199516" Feb 28 05:05:43 crc kubenswrapper[5014]: I0228 05:05:43.918361 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-blfkj" Feb 28 05:05:44 crc kubenswrapper[5014]: I0228 05:05:44.020411 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-blfkj" Feb 28 05:05:44 crc 
kubenswrapper[5014]: I0228 05:05:44.184137 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-blfkj"] Feb 28 05:05:45 crc kubenswrapper[5014]: I0228 05:05:45.171966 5014 scope.go:117] "RemoveContainer" containerID="831c080a3f614d28f54435ea4566bbc7b3d9ce5cf8da86a40f56d8dfeed0dff0" Feb 28 05:05:45 crc kubenswrapper[5014]: E0228 05:05:45.173002 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:05:45 crc kubenswrapper[5014]: I0228 05:05:45.333314 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-blfkj" podUID="4edfdc2b-038c-49c4-ab74-2b2f402457c9" containerName="registry-server" containerID="cri-o://bc208e5f89d00a01bf98c4d7615fa0b0ea715def684d63cef2e6e7b40c9fa7dc" gracePeriod=2 Feb 28 05:05:45 crc kubenswrapper[5014]: I0228 05:05:45.847247 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-blfkj" Feb 28 05:05:45 crc kubenswrapper[5014]: I0228 05:05:45.923227 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4edfdc2b-038c-49c4-ab74-2b2f402457c9-utilities\") pod \"4edfdc2b-038c-49c4-ab74-2b2f402457c9\" (UID: \"4edfdc2b-038c-49c4-ab74-2b2f402457c9\") " Feb 28 05:05:45 crc kubenswrapper[5014]: I0228 05:05:45.923303 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh8tk\" (UniqueName: \"kubernetes.io/projected/4edfdc2b-038c-49c4-ab74-2b2f402457c9-kube-api-access-zh8tk\") pod \"4edfdc2b-038c-49c4-ab74-2b2f402457c9\" (UID: \"4edfdc2b-038c-49c4-ab74-2b2f402457c9\") " Feb 28 05:05:45 crc kubenswrapper[5014]: I0228 05:05:45.923404 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4edfdc2b-038c-49c4-ab74-2b2f402457c9-catalog-content\") pod \"4edfdc2b-038c-49c4-ab74-2b2f402457c9\" (UID: \"4edfdc2b-038c-49c4-ab74-2b2f402457c9\") " Feb 28 05:05:45 crc kubenswrapper[5014]: I0228 05:05:45.926528 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4edfdc2b-038c-49c4-ab74-2b2f402457c9-utilities" (OuterVolumeSpecName: "utilities") pod "4edfdc2b-038c-49c4-ab74-2b2f402457c9" (UID: "4edfdc2b-038c-49c4-ab74-2b2f402457c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:05:45 crc kubenswrapper[5014]: I0228 05:05:45.931016 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4edfdc2b-038c-49c4-ab74-2b2f402457c9-kube-api-access-zh8tk" (OuterVolumeSpecName: "kube-api-access-zh8tk") pod "4edfdc2b-038c-49c4-ab74-2b2f402457c9" (UID: "4edfdc2b-038c-49c4-ab74-2b2f402457c9"). InnerVolumeSpecName "kube-api-access-zh8tk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:05:46 crc kubenswrapper[5014]: I0228 05:05:46.026051 5014 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4edfdc2b-038c-49c4-ab74-2b2f402457c9-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 05:05:46 crc kubenswrapper[5014]: I0228 05:05:46.026080 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zh8tk\" (UniqueName: \"kubernetes.io/projected/4edfdc2b-038c-49c4-ab74-2b2f402457c9-kube-api-access-zh8tk\") on node \"crc\" DevicePath \"\"" Feb 28 05:05:46 crc kubenswrapper[5014]: I0228 05:05:46.053373 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4edfdc2b-038c-49c4-ab74-2b2f402457c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4edfdc2b-038c-49c4-ab74-2b2f402457c9" (UID: "4edfdc2b-038c-49c4-ab74-2b2f402457c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:05:46 crc kubenswrapper[5014]: I0228 05:05:46.126992 5014 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4edfdc2b-038c-49c4-ab74-2b2f402457c9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 05:05:46 crc kubenswrapper[5014]: I0228 05:05:46.350025 5014 generic.go:334] "Generic (PLEG): container finished" podID="4edfdc2b-038c-49c4-ab74-2b2f402457c9" containerID="bc208e5f89d00a01bf98c4d7615fa0b0ea715def684d63cef2e6e7b40c9fa7dc" exitCode=0 Feb 28 05:05:46 crc kubenswrapper[5014]: I0228 05:05:46.350131 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-blfkj" Feb 28 05:05:46 crc kubenswrapper[5014]: I0228 05:05:46.350114 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blfkj" event={"ID":"4edfdc2b-038c-49c4-ab74-2b2f402457c9","Type":"ContainerDied","Data":"bc208e5f89d00a01bf98c4d7615fa0b0ea715def684d63cef2e6e7b40c9fa7dc"} Feb 28 05:05:46 crc kubenswrapper[5014]: I0228 05:05:46.351158 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-blfkj" event={"ID":"4edfdc2b-038c-49c4-ab74-2b2f402457c9","Type":"ContainerDied","Data":"cc719284cde4a477cf96385f021114bb9226808cfb8a9ef1f48ef8ce25f93ae2"} Feb 28 05:05:46 crc kubenswrapper[5014]: I0228 05:05:46.351197 5014 scope.go:117] "RemoveContainer" containerID="bc208e5f89d00a01bf98c4d7615fa0b0ea715def684d63cef2e6e7b40c9fa7dc" Feb 28 05:05:46 crc kubenswrapper[5014]: I0228 05:05:46.433220 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-blfkj"] Feb 28 05:05:46 crc kubenswrapper[5014]: I0228 05:05:46.440975 5014 scope.go:117] "RemoveContainer" containerID="58f4e8fef84140917cdf5a650974c7f2d8e5c755048557c6a76cdbdee10cb49b" Feb 28 05:05:46 crc kubenswrapper[5014]: I0228 05:05:46.448032 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-blfkj"] Feb 28 05:05:46 crc kubenswrapper[5014]: I0228 05:05:46.473715 5014 scope.go:117] "RemoveContainer" containerID="ea1ae05609027626c73e8b8fd4bf9b701dae4594a5266e8ee8fd73291cff9084" Feb 28 05:05:46 crc kubenswrapper[5014]: I0228 05:05:46.553170 5014 scope.go:117] "RemoveContainer" containerID="bc208e5f89d00a01bf98c4d7615fa0b0ea715def684d63cef2e6e7b40c9fa7dc" Feb 28 05:05:46 crc kubenswrapper[5014]: E0228 05:05:46.554394 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"bc208e5f89d00a01bf98c4d7615fa0b0ea715def684d63cef2e6e7b40c9fa7dc\": container with ID starting with bc208e5f89d00a01bf98c4d7615fa0b0ea715def684d63cef2e6e7b40c9fa7dc not found: ID does not exist" containerID="bc208e5f89d00a01bf98c4d7615fa0b0ea715def684d63cef2e6e7b40c9fa7dc" Feb 28 05:05:46 crc kubenswrapper[5014]: I0228 05:05:46.554473 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc208e5f89d00a01bf98c4d7615fa0b0ea715def684d63cef2e6e7b40c9fa7dc"} err="failed to get container status \"bc208e5f89d00a01bf98c4d7615fa0b0ea715def684d63cef2e6e7b40c9fa7dc\": rpc error: code = NotFound desc = could not find container \"bc208e5f89d00a01bf98c4d7615fa0b0ea715def684d63cef2e6e7b40c9fa7dc\": container with ID starting with bc208e5f89d00a01bf98c4d7615fa0b0ea715def684d63cef2e6e7b40c9fa7dc not found: ID does not exist" Feb 28 05:05:46 crc kubenswrapper[5014]: I0228 05:05:46.554516 5014 scope.go:117] "RemoveContainer" containerID="58f4e8fef84140917cdf5a650974c7f2d8e5c755048557c6a76cdbdee10cb49b" Feb 28 05:05:46 crc kubenswrapper[5014]: E0228 05:05:46.555192 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58f4e8fef84140917cdf5a650974c7f2d8e5c755048557c6a76cdbdee10cb49b\": container with ID starting with 58f4e8fef84140917cdf5a650974c7f2d8e5c755048557c6a76cdbdee10cb49b not found: ID does not exist" containerID="58f4e8fef84140917cdf5a650974c7f2d8e5c755048557c6a76cdbdee10cb49b" Feb 28 05:05:46 crc kubenswrapper[5014]: I0228 05:05:46.555247 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58f4e8fef84140917cdf5a650974c7f2d8e5c755048557c6a76cdbdee10cb49b"} err="failed to get container status \"58f4e8fef84140917cdf5a650974c7f2d8e5c755048557c6a76cdbdee10cb49b\": rpc error: code = NotFound desc = could not find container \"58f4e8fef84140917cdf5a650974c7f2d8e5c755048557c6a76cdbdee10cb49b\": container with ID 
starting with 58f4e8fef84140917cdf5a650974c7f2d8e5c755048557c6a76cdbdee10cb49b not found: ID does not exist" Feb 28 05:05:46 crc kubenswrapper[5014]: I0228 05:05:46.555279 5014 scope.go:117] "RemoveContainer" containerID="ea1ae05609027626c73e8b8fd4bf9b701dae4594a5266e8ee8fd73291cff9084" Feb 28 05:05:46 crc kubenswrapper[5014]: E0228 05:05:46.555619 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea1ae05609027626c73e8b8fd4bf9b701dae4594a5266e8ee8fd73291cff9084\": container with ID starting with ea1ae05609027626c73e8b8fd4bf9b701dae4594a5266e8ee8fd73291cff9084 not found: ID does not exist" containerID="ea1ae05609027626c73e8b8fd4bf9b701dae4594a5266e8ee8fd73291cff9084" Feb 28 05:05:46 crc kubenswrapper[5014]: I0228 05:05:46.555658 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea1ae05609027626c73e8b8fd4bf9b701dae4594a5266e8ee8fd73291cff9084"} err="failed to get container status \"ea1ae05609027626c73e8b8fd4bf9b701dae4594a5266e8ee8fd73291cff9084\": rpc error: code = NotFound desc = could not find container \"ea1ae05609027626c73e8b8fd4bf9b701dae4594a5266e8ee8fd73291cff9084\": container with ID starting with ea1ae05609027626c73e8b8fd4bf9b701dae4594a5266e8ee8fd73291cff9084 not found: ID does not exist" Feb 28 05:05:47 crc kubenswrapper[5014]: I0228 05:05:47.363025 5014 generic.go:334] "Generic (PLEG): container finished" podID="3d570627-429c-4a9c-a45a-55d652968c46" containerID="d0721e997cffaf80544adf455228004976f339e695e7278d865e493236c708fa" exitCode=0 Feb 28 05:05:47 crc kubenswrapper[5014]: I0228 05:05:47.363104 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mbxvz" event={"ID":"3d570627-429c-4a9c-a45a-55d652968c46","Type":"ContainerDied","Data":"d0721e997cffaf80544adf455228004976f339e695e7278d865e493236c708fa"} Feb 28 05:05:48 crc kubenswrapper[5014]: I0228 05:05:48.185408 
5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4edfdc2b-038c-49c4-ab74-2b2f402457c9" path="/var/lib/kubelet/pods/4edfdc2b-038c-49c4-ab74-2b2f402457c9/volumes" Feb 28 05:05:48 crc kubenswrapper[5014]: I0228 05:05:48.848766 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mbxvz" Feb 28 05:05:48 crc kubenswrapper[5014]: I0228 05:05:48.977940 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szrdk\" (UniqueName: \"kubernetes.io/projected/3d570627-429c-4a9c-a45a-55d652968c46-kube-api-access-szrdk\") pod \"3d570627-429c-4a9c-a45a-55d652968c46\" (UID: \"3d570627-429c-4a9c-a45a-55d652968c46\") " Feb 28 05:05:48 crc kubenswrapper[5014]: I0228 05:05:48.978219 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d570627-429c-4a9c-a45a-55d652968c46-ssh-key-openstack-edpm-ipam\") pod \"3d570627-429c-4a9c-a45a-55d652968c46\" (UID: \"3d570627-429c-4a9c-a45a-55d652968c46\") " Feb 28 05:05:48 crc kubenswrapper[5014]: I0228 05:05:48.978277 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d570627-429c-4a9c-a45a-55d652968c46-inventory\") pod \"3d570627-429c-4a9c-a45a-55d652968c46\" (UID: \"3d570627-429c-4a9c-a45a-55d652968c46\") " Feb 28 05:05:48 crc kubenswrapper[5014]: I0228 05:05:48.985116 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d570627-429c-4a9c-a45a-55d652968c46-kube-api-access-szrdk" (OuterVolumeSpecName: "kube-api-access-szrdk") pod "3d570627-429c-4a9c-a45a-55d652968c46" (UID: "3d570627-429c-4a9c-a45a-55d652968c46"). InnerVolumeSpecName "kube-api-access-szrdk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:05:49 crc kubenswrapper[5014]: I0228 05:05:49.006468 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d570627-429c-4a9c-a45a-55d652968c46-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3d570627-429c-4a9c-a45a-55d652968c46" (UID: "3d570627-429c-4a9c-a45a-55d652968c46"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:05:49 crc kubenswrapper[5014]: I0228 05:05:49.013414 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d570627-429c-4a9c-a45a-55d652968c46-inventory" (OuterVolumeSpecName: "inventory") pod "3d570627-429c-4a9c-a45a-55d652968c46" (UID: "3d570627-429c-4a9c-a45a-55d652968c46"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:05:49 crc kubenswrapper[5014]: I0228 05:05:49.080171 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szrdk\" (UniqueName: \"kubernetes.io/projected/3d570627-429c-4a9c-a45a-55d652968c46-kube-api-access-szrdk\") on node \"crc\" DevicePath \"\"" Feb 28 05:05:49 crc kubenswrapper[5014]: I0228 05:05:49.080218 5014 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3d570627-429c-4a9c-a45a-55d652968c46-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 05:05:49 crc kubenswrapper[5014]: I0228 05:05:49.080232 5014 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3d570627-429c-4a9c-a45a-55d652968c46-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 05:05:49 crc kubenswrapper[5014]: I0228 05:05:49.385680 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mbxvz" 
event={"ID":"3d570627-429c-4a9c-a45a-55d652968c46","Type":"ContainerDied","Data":"a88dd9c98fb617f31ae105967c0607fe1efe8dc52e4f383e50060bc014d556e3"} Feb 28 05:05:49 crc kubenswrapper[5014]: I0228 05:05:49.385724 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a88dd9c98fb617f31ae105967c0607fe1efe8dc52e4f383e50060bc014d556e3" Feb 28 05:05:49 crc kubenswrapper[5014]: I0228 05:05:49.385753 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-mbxvz" Feb 28 05:05:49 crc kubenswrapper[5014]: I0228 05:05:49.480090 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nrlg4"] Feb 28 05:05:49 crc kubenswrapper[5014]: E0228 05:05:49.480626 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4edfdc2b-038c-49c4-ab74-2b2f402457c9" containerName="registry-server" Feb 28 05:05:49 crc kubenswrapper[5014]: I0228 05:05:49.480647 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="4edfdc2b-038c-49c4-ab74-2b2f402457c9" containerName="registry-server" Feb 28 05:05:49 crc kubenswrapper[5014]: E0228 05:05:49.480659 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4edfdc2b-038c-49c4-ab74-2b2f402457c9" containerName="extract-content" Feb 28 05:05:49 crc kubenswrapper[5014]: I0228 05:05:49.480668 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="4edfdc2b-038c-49c4-ab74-2b2f402457c9" containerName="extract-content" Feb 28 05:05:49 crc kubenswrapper[5014]: E0228 05:05:49.480683 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4edfdc2b-038c-49c4-ab74-2b2f402457c9" containerName="extract-utilities" Feb 28 05:05:49 crc kubenswrapper[5014]: I0228 05:05:49.480691 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="4edfdc2b-038c-49c4-ab74-2b2f402457c9" containerName="extract-utilities" Feb 28 05:05:49 crc kubenswrapper[5014]: 
E0228 05:05:49.480716 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d570627-429c-4a9c-a45a-55d652968c46" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 28 05:05:49 crc kubenswrapper[5014]: I0228 05:05:49.480724 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d570627-429c-4a9c-a45a-55d652968c46" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 28 05:05:49 crc kubenswrapper[5014]: I0228 05:05:49.480952 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d570627-429c-4a9c-a45a-55d652968c46" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Feb 28 05:05:49 crc kubenswrapper[5014]: I0228 05:05:49.480971 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="4edfdc2b-038c-49c4-ab74-2b2f402457c9" containerName="registry-server" Feb 28 05:05:49 crc kubenswrapper[5014]: I0228 05:05:49.481761 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nrlg4" Feb 28 05:05:49 crc kubenswrapper[5014]: I0228 05:05:49.484090 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6dz6b" Feb 28 05:05:49 crc kubenswrapper[5014]: I0228 05:05:49.485979 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 05:05:49 crc kubenswrapper[5014]: I0228 05:05:49.487302 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 05:05:49 crc kubenswrapper[5014]: I0228 05:05:49.488008 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 05:05:49 crc kubenswrapper[5014]: I0228 05:05:49.488764 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nrlg4"] Feb 28 05:05:49 crc kubenswrapper[5014]: 
I0228 05:05:49.588836 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrp2l\" (UniqueName: \"kubernetes.io/projected/04a4501f-8652-4960-aa15-e083bf2c5b68-kube-api-access-mrp2l\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nrlg4\" (UID: \"04a4501f-8652-4960-aa15-e083bf2c5b68\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nrlg4" Feb 28 05:05:49 crc kubenswrapper[5014]: I0228 05:05:49.588934 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/04a4501f-8652-4960-aa15-e083bf2c5b68-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nrlg4\" (UID: \"04a4501f-8652-4960-aa15-e083bf2c5b68\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nrlg4" Feb 28 05:05:49 crc kubenswrapper[5014]: I0228 05:05:49.589090 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04a4501f-8652-4960-aa15-e083bf2c5b68-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nrlg4\" (UID: \"04a4501f-8652-4960-aa15-e083bf2c5b68\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nrlg4" Feb 28 05:05:49 crc kubenswrapper[5014]: I0228 05:05:49.690647 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/04a4501f-8652-4960-aa15-e083bf2c5b68-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nrlg4\" (UID: \"04a4501f-8652-4960-aa15-e083bf2c5b68\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nrlg4" Feb 28 05:05:49 crc kubenswrapper[5014]: I0228 05:05:49.690755 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/04a4501f-8652-4960-aa15-e083bf2c5b68-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nrlg4\" (UID: \"04a4501f-8652-4960-aa15-e083bf2c5b68\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nrlg4" Feb 28 05:05:49 crc kubenswrapper[5014]: I0228 05:05:49.690853 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrp2l\" (UniqueName: \"kubernetes.io/projected/04a4501f-8652-4960-aa15-e083bf2c5b68-kube-api-access-mrp2l\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nrlg4\" (UID: \"04a4501f-8652-4960-aa15-e083bf2c5b68\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nrlg4" Feb 28 05:05:49 crc kubenswrapper[5014]: I0228 05:05:49.696293 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04a4501f-8652-4960-aa15-e083bf2c5b68-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nrlg4\" (UID: \"04a4501f-8652-4960-aa15-e083bf2c5b68\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nrlg4" Feb 28 05:05:49 crc kubenswrapper[5014]: I0228 05:05:49.697194 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/04a4501f-8652-4960-aa15-e083bf2c5b68-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nrlg4\" (UID: \"04a4501f-8652-4960-aa15-e083bf2c5b68\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nrlg4" Feb 28 05:05:49 crc kubenswrapper[5014]: I0228 05:05:49.724027 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrp2l\" (UniqueName: \"kubernetes.io/projected/04a4501f-8652-4960-aa15-e083bf2c5b68-kube-api-access-mrp2l\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-nrlg4\" (UID: \"04a4501f-8652-4960-aa15-e083bf2c5b68\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nrlg4" Feb 28 05:05:49 crc kubenswrapper[5014]: I0228 05:05:49.802852 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nrlg4" Feb 28 05:05:50 crc kubenswrapper[5014]: I0228 05:05:50.431904 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nrlg4"] Feb 28 05:05:51 crc kubenswrapper[5014]: I0228 05:05:51.409512 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nrlg4" event={"ID":"04a4501f-8652-4960-aa15-e083bf2c5b68","Type":"ContainerStarted","Data":"39e5d28e34450f7cba22180303f55cce126692b6b2d6df5bf2ced0576e76e208"} Feb 28 05:05:51 crc kubenswrapper[5014]: I0228 05:05:51.410139 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nrlg4" event={"ID":"04a4501f-8652-4960-aa15-e083bf2c5b68","Type":"ContainerStarted","Data":"e910c0bf71ca4e0181dce3f76a504d0f73287bfe66aa6a20ad9b56befccd21d2"} Feb 28 05:05:51 crc kubenswrapper[5014]: I0228 05:05:51.435505 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nrlg4" podStartSLOduration=2.007479863 podStartE2EDuration="2.43547894s" podCreationTimestamp="2026-02-28 05:05:49 +0000 UTC" firstStartedPulling="2026-02-28 05:05:50.432203113 +0000 UTC m=+1939.102329063" lastFinishedPulling="2026-02-28 05:05:50.86020223 +0000 UTC m=+1939.530328140" observedRunningTime="2026-02-28 05:05:51.428367676 +0000 UTC m=+1940.098493586" watchObservedRunningTime="2026-02-28 05:05:51.43547894 +0000 UTC m=+1940.105604860" Feb 28 05:05:57 crc kubenswrapper[5014]: I0228 05:05:57.065107 5014 scope.go:117] "RemoveContainer" containerID="b1dd5b5ba21cf1804bc869620e9853290a2dddf95a8896d0ee3babae155e8083" Feb 28 05:05:57 crc 
kubenswrapper[5014]: I0228 05:05:57.109462 5014 scope.go:117] "RemoveContainer" containerID="8a448f0ca7e013cbc317b3a7ab992f0d25fccd158f4c18c910f802df198f4a0f" Feb 28 05:05:57 crc kubenswrapper[5014]: I0228 05:05:57.143840 5014 scope.go:117] "RemoveContainer" containerID="f309a6be4a34fc7643f1ea01c54cfa09bfd84d2ee5ea74c1f0a01c7e3de4583c" Feb 28 05:05:57 crc kubenswrapper[5014]: I0228 05:05:57.171556 5014 scope.go:117] "RemoveContainer" containerID="831c080a3f614d28f54435ea4566bbc7b3d9ce5cf8da86a40f56d8dfeed0dff0" Feb 28 05:05:57 crc kubenswrapper[5014]: E0228 05:05:57.171974 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:06:00 crc kubenswrapper[5014]: I0228 05:06:00.148962 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537586-dmm99"] Feb 28 05:06:00 crc kubenswrapper[5014]: I0228 05:06:00.151242 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537586-dmm99" Feb 28 05:06:00 crc kubenswrapper[5014]: I0228 05:06:00.153182 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-r6pdk" Feb 28 05:06:00 crc kubenswrapper[5014]: I0228 05:06:00.154495 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 05:06:00 crc kubenswrapper[5014]: I0228 05:06:00.154515 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 05:06:00 crc kubenswrapper[5014]: I0228 05:06:00.164479 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537586-dmm99"] Feb 28 05:06:00 crc kubenswrapper[5014]: I0228 05:06:00.298194 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4vkk\" (UniqueName: \"kubernetes.io/projected/9991fa5e-8673-41c7-8061-a43c23654a6b-kube-api-access-v4vkk\") pod \"auto-csr-approver-29537586-dmm99\" (UID: \"9991fa5e-8673-41c7-8061-a43c23654a6b\") " pod="openshift-infra/auto-csr-approver-29537586-dmm99" Feb 28 05:06:00 crc kubenswrapper[5014]: I0228 05:06:00.400568 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4vkk\" (UniqueName: \"kubernetes.io/projected/9991fa5e-8673-41c7-8061-a43c23654a6b-kube-api-access-v4vkk\") pod \"auto-csr-approver-29537586-dmm99\" (UID: \"9991fa5e-8673-41c7-8061-a43c23654a6b\") " pod="openshift-infra/auto-csr-approver-29537586-dmm99" Feb 28 05:06:00 crc kubenswrapper[5014]: I0228 05:06:00.436413 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4vkk\" (UniqueName: \"kubernetes.io/projected/9991fa5e-8673-41c7-8061-a43c23654a6b-kube-api-access-v4vkk\") pod \"auto-csr-approver-29537586-dmm99\" (UID: \"9991fa5e-8673-41c7-8061-a43c23654a6b\") " 
pod="openshift-infra/auto-csr-approver-29537586-dmm99" Feb 28 05:06:00 crc kubenswrapper[5014]: I0228 05:06:00.470200 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537586-dmm99" Feb 28 05:06:00 crc kubenswrapper[5014]: I0228 05:06:00.503504 5014 generic.go:334] "Generic (PLEG): container finished" podID="04a4501f-8652-4960-aa15-e083bf2c5b68" containerID="39e5d28e34450f7cba22180303f55cce126692b6b2d6df5bf2ced0576e76e208" exitCode=0 Feb 28 05:06:00 crc kubenswrapper[5014]: I0228 05:06:00.503568 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nrlg4" event={"ID":"04a4501f-8652-4960-aa15-e083bf2c5b68","Type":"ContainerDied","Data":"39e5d28e34450f7cba22180303f55cce126692b6b2d6df5bf2ced0576e76e208"} Feb 28 05:06:00 crc kubenswrapper[5014]: I0228 05:06:00.920234 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537586-dmm99"] Feb 28 05:06:01 crc kubenswrapper[5014]: I0228 05:06:01.514788 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537586-dmm99" event={"ID":"9991fa5e-8673-41c7-8061-a43c23654a6b","Type":"ContainerStarted","Data":"10a90cb2f2271ddbb5c234dda89c69f0c91b66597c18018b652f697ed2361867"} Feb 28 05:06:01 crc kubenswrapper[5014]: I0228 05:06:01.950523 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nrlg4" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.037553 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04a4501f-8652-4960-aa15-e083bf2c5b68-inventory\") pod \"04a4501f-8652-4960-aa15-e083bf2c5b68\" (UID: \"04a4501f-8652-4960-aa15-e083bf2c5b68\") " Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.037775 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/04a4501f-8652-4960-aa15-e083bf2c5b68-ssh-key-openstack-edpm-ipam\") pod \"04a4501f-8652-4960-aa15-e083bf2c5b68\" (UID: \"04a4501f-8652-4960-aa15-e083bf2c5b68\") " Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.037824 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrp2l\" (UniqueName: \"kubernetes.io/projected/04a4501f-8652-4960-aa15-e083bf2c5b68-kube-api-access-mrp2l\") pod \"04a4501f-8652-4960-aa15-e083bf2c5b68\" (UID: \"04a4501f-8652-4960-aa15-e083bf2c5b68\") " Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.044539 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04a4501f-8652-4960-aa15-e083bf2c5b68-kube-api-access-mrp2l" (OuterVolumeSpecName: "kube-api-access-mrp2l") pod "04a4501f-8652-4960-aa15-e083bf2c5b68" (UID: "04a4501f-8652-4960-aa15-e083bf2c5b68"). InnerVolumeSpecName "kube-api-access-mrp2l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.067279 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04a4501f-8652-4960-aa15-e083bf2c5b68-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "04a4501f-8652-4960-aa15-e083bf2c5b68" (UID: "04a4501f-8652-4960-aa15-e083bf2c5b68"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.067386 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04a4501f-8652-4960-aa15-e083bf2c5b68-inventory" (OuterVolumeSpecName: "inventory") pod "04a4501f-8652-4960-aa15-e083bf2c5b68" (UID: "04a4501f-8652-4960-aa15-e083bf2c5b68"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.140132 5014 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/04a4501f-8652-4960-aa15-e083bf2c5b68-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.140166 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrp2l\" (UniqueName: \"kubernetes.io/projected/04a4501f-8652-4960-aa15-e083bf2c5b68-kube-api-access-mrp2l\") on node \"crc\" DevicePath \"\"" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.140176 5014 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/04a4501f-8652-4960-aa15-e083bf2c5b68-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.526503 5014 generic.go:334] "Generic (PLEG): container finished" podID="9991fa5e-8673-41c7-8061-a43c23654a6b" 
containerID="5676269a0cd8da3c375f1ae3e8e646559243e08af52ed02066259e681770b2e3" exitCode=0 Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.526585 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537586-dmm99" event={"ID":"9991fa5e-8673-41c7-8061-a43c23654a6b","Type":"ContainerDied","Data":"5676269a0cd8da3c375f1ae3e8e646559243e08af52ed02066259e681770b2e3"} Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.529599 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nrlg4" event={"ID":"04a4501f-8652-4960-aa15-e083bf2c5b68","Type":"ContainerDied","Data":"e910c0bf71ca4e0181dce3f76a504d0f73287bfe66aa6a20ad9b56befccd21d2"} Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.529640 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e910c0bf71ca4e0181dce3f76a504d0f73287bfe66aa6a20ad9b56befccd21d2" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.529634 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-nrlg4" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.608482 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b"] Feb 28 05:06:02 crc kubenswrapper[5014]: E0228 05:06:02.609387 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04a4501f-8652-4960-aa15-e083bf2c5b68" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.609416 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="04a4501f-8652-4960-aa15-e083bf2c5b68" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.609681 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="04a4501f-8652-4960-aa15-e083bf2c5b68" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.610466 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.613833 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.614473 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.614696 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6dz6b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.614979 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.615138 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.616108 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.616360 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.616412 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.635451 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b"] Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.753474 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.753727 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.753849 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.753990 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.754076 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.754159 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.754239 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.754315 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.754424 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.754530 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.754608 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.754688 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6d24l\" (UniqueName: \"kubernetes.io/projected/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-kube-api-access-6d24l\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.754844 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.754906 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.856970 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6d24l\" (UniqueName: \"kubernetes.io/projected/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-kube-api-access-6d24l\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.857031 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.857057 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.857110 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.857132 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.857150 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.857195 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.857213 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.857247 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.857283 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.857317 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-inventory\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.857354 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.857390 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.857408 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.863518 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") 
" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.864499 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.865323 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.865360 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.865972 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.866523 5014 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.866679 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.867048 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.868008 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.869475 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.870986 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.871466 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.873503 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.891972 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6d24l\" (UniqueName: 
\"kubernetes.io/projected/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-kube-api-access-6d24l\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:02 crc kubenswrapper[5014]: I0228 05:06:02.943555 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:03 crc kubenswrapper[5014]: I0228 05:06:03.544852 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b"] Feb 28 05:06:03 crc kubenswrapper[5014]: W0228 05:06:03.554735 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd7991b4_f7f5_4c3e_b2e6_7ba07d7d15a1.slice/crio-e0368965033540ddb4e84fa52e8d37c5aceabc3dbc785ca94bb7ec286619e3bd WatchSource:0}: Error finding container e0368965033540ddb4e84fa52e8d37c5aceabc3dbc785ca94bb7ec286619e3bd: Status 404 returned error can't find the container with id e0368965033540ddb4e84fa52e8d37c5aceabc3dbc785ca94bb7ec286619e3bd Feb 28 05:06:03 crc kubenswrapper[5014]: I0228 05:06:03.827737 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537586-dmm99" Feb 28 05:06:03 crc kubenswrapper[5014]: I0228 05:06:03.981688 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4vkk\" (UniqueName: \"kubernetes.io/projected/9991fa5e-8673-41c7-8061-a43c23654a6b-kube-api-access-v4vkk\") pod \"9991fa5e-8673-41c7-8061-a43c23654a6b\" (UID: \"9991fa5e-8673-41c7-8061-a43c23654a6b\") " Feb 28 05:06:03 crc kubenswrapper[5014]: I0228 05:06:03.984962 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9991fa5e-8673-41c7-8061-a43c23654a6b-kube-api-access-v4vkk" (OuterVolumeSpecName: "kube-api-access-v4vkk") pod "9991fa5e-8673-41c7-8061-a43c23654a6b" (UID: "9991fa5e-8673-41c7-8061-a43c23654a6b"). InnerVolumeSpecName "kube-api-access-v4vkk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:06:04 crc kubenswrapper[5014]: I0228 05:06:04.084516 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4vkk\" (UniqueName: \"kubernetes.io/projected/9991fa5e-8673-41c7-8061-a43c23654a6b-kube-api-access-v4vkk\") on node \"crc\" DevicePath \"\"" Feb 28 05:06:04 crc kubenswrapper[5014]: I0228 05:06:04.551679 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537586-dmm99" Feb 28 05:06:04 crc kubenswrapper[5014]: I0228 05:06:04.551700 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537586-dmm99" event={"ID":"9991fa5e-8673-41c7-8061-a43c23654a6b","Type":"ContainerDied","Data":"10a90cb2f2271ddbb5c234dda89c69f0c91b66597c18018b652f697ed2361867"} Feb 28 05:06:04 crc kubenswrapper[5014]: I0228 05:06:04.551740 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10a90cb2f2271ddbb5c234dda89c69f0c91b66597c18018b652f697ed2361867" Feb 28 05:06:04 crc kubenswrapper[5014]: I0228 05:06:04.553466 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" event={"ID":"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1","Type":"ContainerStarted","Data":"3babb12570d0e7301f22c942d9d2186b29abf34a5695128ab97bffe60a14c14b"} Feb 28 05:06:04 crc kubenswrapper[5014]: I0228 05:06:04.553534 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" event={"ID":"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1","Type":"ContainerStarted","Data":"e0368965033540ddb4e84fa52e8d37c5aceabc3dbc785ca94bb7ec286619e3bd"} Feb 28 05:06:04 crc kubenswrapper[5014]: I0228 05:06:04.581659 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" podStartSLOduration=2.166432146 podStartE2EDuration="2.581640065s" podCreationTimestamp="2026-02-28 05:06:02 +0000 UTC" firstStartedPulling="2026-02-28 05:06:03.559988566 +0000 UTC m=+1952.230114486" lastFinishedPulling="2026-02-28 05:06:03.975196495 +0000 UTC m=+1952.645322405" observedRunningTime="2026-02-28 05:06:04.57337868 +0000 UTC m=+1953.243504610" watchObservedRunningTime="2026-02-28 05:06:04.581640065 +0000 UTC m=+1953.251765985" Feb 28 05:06:04 crc kubenswrapper[5014]: 
I0228 05:06:04.934970 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537580-fxfr8"] Feb 28 05:06:04 crc kubenswrapper[5014]: I0228 05:06:04.944452 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537580-fxfr8"] Feb 28 05:06:06 crc kubenswrapper[5014]: I0228 05:06:06.199113 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88defcda-3a2d-400f-8906-0c8c958c8f31" path="/var/lib/kubelet/pods/88defcda-3a2d-400f-8906-0c8c958c8f31/volumes" Feb 28 05:06:11 crc kubenswrapper[5014]: I0228 05:06:11.172490 5014 scope.go:117] "RemoveContainer" containerID="831c080a3f614d28f54435ea4566bbc7b3d9ce5cf8da86a40f56d8dfeed0dff0" Feb 28 05:06:11 crc kubenswrapper[5014]: E0228 05:06:11.173696 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:06:17 crc kubenswrapper[5014]: I0228 05:06:17.055397 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-xz9gg"] Feb 28 05:06:17 crc kubenswrapper[5014]: I0228 05:06:17.071648 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-xz9gg"] Feb 28 05:06:18 crc kubenswrapper[5014]: I0228 05:06:18.191998 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47e5f3ce-9596-4be2-a8e1-363a7abd090f" path="/var/lib/kubelet/pods/47e5f3ce-9596-4be2-a8e1-363a7abd090f/volumes" Feb 28 05:06:23 crc kubenswrapper[5014]: I0228 05:06:23.172127 5014 scope.go:117] "RemoveContainer" containerID="831c080a3f614d28f54435ea4566bbc7b3d9ce5cf8da86a40f56d8dfeed0dff0" Feb 28 
05:06:23 crc kubenswrapper[5014]: I0228 05:06:23.728463 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" event={"ID":"6aad0009-d904-48f8-8e30-82205907ece1","Type":"ContainerStarted","Data":"0cf39994ea3bad20406b99bb9d09d57069d0bc9c30b59c1f02196a3ad836f5b7"} Feb 28 05:06:43 crc kubenswrapper[5014]: I0228 05:06:43.916400 5014 generic.go:334] "Generic (PLEG): container finished" podID="fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1" containerID="3babb12570d0e7301f22c942d9d2186b29abf34a5695128ab97bffe60a14c14b" exitCode=0 Feb 28 05:06:43 crc kubenswrapper[5014]: I0228 05:06:43.917001 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" event={"ID":"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1","Type":"ContainerDied","Data":"3babb12570d0e7301f22c942d9d2186b29abf34a5695128ab97bffe60a14c14b"} Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.412361 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.472578 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-neutron-metadata-combined-ca-bundle\") pod \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.472689 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-ovn-combined-ca-bundle\") pod \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.472716 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-inventory\") pod \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.472741 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-ssh-key-openstack-edpm-ipam\") pod \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.472792 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " Feb 28 
05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.472838 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-openstack-edpm-ipam-ovn-default-certs-0\") pod \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.472884 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-bootstrap-combined-ca-bundle\") pod \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.472902 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.472926 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.472947 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-nova-combined-ca-bundle\") pod \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " Feb 28 
05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.472981 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-telemetry-combined-ca-bundle\") pod \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.473034 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-repo-setup-combined-ca-bundle\") pod \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.473071 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-libvirt-combined-ca-bundle\") pod \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.473097 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6d24l\" (UniqueName: \"kubernetes.io/projected/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-kube-api-access-6d24l\") pod \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\" (UID: \"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1\") " Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.479962 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1" (UID: "fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.480758 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1" (UID: "fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.481777 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1" (UID: "fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.483114 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1" (UID: "fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.483977 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1" (UID: "fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1"). 
InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.486938 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1" (UID: "fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.487107 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-kube-api-access-6d24l" (OuterVolumeSpecName: "kube-api-access-6d24l") pod "fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1" (UID: "fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1"). InnerVolumeSpecName "kube-api-access-6d24l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.487175 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1" (UID: "fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.487585 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1" (UID: "fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.488721 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1" (UID: "fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.488820 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1" (UID: "fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.502992 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1" (UID: "fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.508985 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1" (UID: "fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.511432 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-inventory" (OuterVolumeSpecName: "inventory") pod "fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1" (UID: "fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.574895 5014 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.574930 5014 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.574944 5014 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.574956 5014 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.574994 5014 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-telemetry-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.575004 5014 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.575012 5014 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.575020 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6d24l\" (UniqueName: \"kubernetes.io/projected/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-kube-api-access-6d24l\") on node \"crc\" DevicePath \"\"" Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.575029 5014 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.575039 5014 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.575102 5014 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.575134 5014 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-ssh-key-openstack-edpm-ipam\") on node \"crc\" 
DevicePath \"\"" Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.575142 5014 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.575169 5014 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.941686 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" event={"ID":"fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1","Type":"ContainerDied","Data":"e0368965033540ddb4e84fa52e8d37c5aceabc3dbc785ca94bb7ec286619e3bd"} Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.942113 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0368965033540ddb4e84fa52e8d37c5aceabc3dbc785ca94bb7ec286619e3bd" Feb 28 05:06:45 crc kubenswrapper[5014]: I0228 05:06:45.942031 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b" Feb 28 05:06:46 crc kubenswrapper[5014]: I0228 05:06:46.163353 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-gdrsz"] Feb 28 05:06:46 crc kubenswrapper[5014]: E0228 05:06:46.163793 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 28 05:06:46 crc kubenswrapper[5014]: I0228 05:06:46.163837 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 28 05:06:46 crc kubenswrapper[5014]: E0228 05:06:46.163871 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9991fa5e-8673-41c7-8061-a43c23654a6b" containerName="oc" Feb 28 05:06:46 crc kubenswrapper[5014]: I0228 05:06:46.163879 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="9991fa5e-8673-41c7-8061-a43c23654a6b" containerName="oc" Feb 28 05:06:46 crc kubenswrapper[5014]: I0228 05:06:46.164097 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 28 05:06:46 crc kubenswrapper[5014]: I0228 05:06:46.164123 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="9991fa5e-8673-41c7-8061-a43c23654a6b" containerName="oc" Feb 28 05:06:46 crc kubenswrapper[5014]: I0228 05:06:46.177114 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-gdrsz"] Feb 28 05:06:46 crc kubenswrapper[5014]: I0228 05:06:46.178950 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gdrsz" Feb 28 05:06:46 crc kubenswrapper[5014]: I0228 05:06:46.180973 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6dz6b" Feb 28 05:06:46 crc kubenswrapper[5014]: I0228 05:06:46.181215 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 05:06:46 crc kubenswrapper[5014]: I0228 05:06:46.181284 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 05:06:46 crc kubenswrapper[5014]: I0228 05:06:46.181359 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 28 05:06:46 crc kubenswrapper[5014]: I0228 05:06:46.182183 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 05:06:46 crc kubenswrapper[5014]: I0228 05:06:46.293729 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdp8j\" (UniqueName: \"kubernetes.io/projected/ab8babaf-acb3-4c27-a8bd-abc56808e9d7-kube-api-access-sdp8j\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gdrsz\" (UID: \"ab8babaf-acb3-4c27-a8bd-abc56808e9d7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gdrsz" Feb 28 05:06:46 crc kubenswrapper[5014]: I0228 05:06:46.294017 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8babaf-acb3-4c27-a8bd-abc56808e9d7-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gdrsz\" (UID: \"ab8babaf-acb3-4c27-a8bd-abc56808e9d7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gdrsz" Feb 28 05:06:46 crc kubenswrapper[5014]: I0228 05:06:46.294590 5014 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ab8babaf-acb3-4c27-a8bd-abc56808e9d7-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gdrsz\" (UID: \"ab8babaf-acb3-4c27-a8bd-abc56808e9d7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gdrsz" Feb 28 05:06:46 crc kubenswrapper[5014]: I0228 05:06:46.294905 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab8babaf-acb3-4c27-a8bd-abc56808e9d7-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gdrsz\" (UID: \"ab8babaf-acb3-4c27-a8bd-abc56808e9d7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gdrsz" Feb 28 05:06:46 crc kubenswrapper[5014]: I0228 05:06:46.294994 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab8babaf-acb3-4c27-a8bd-abc56808e9d7-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gdrsz\" (UID: \"ab8babaf-acb3-4c27-a8bd-abc56808e9d7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gdrsz" Feb 28 05:06:46 crc kubenswrapper[5014]: I0228 05:06:46.397070 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab8babaf-acb3-4c27-a8bd-abc56808e9d7-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gdrsz\" (UID: \"ab8babaf-acb3-4c27-a8bd-abc56808e9d7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gdrsz" Feb 28 05:06:46 crc kubenswrapper[5014]: I0228 05:06:46.397138 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab8babaf-acb3-4c27-a8bd-abc56808e9d7-inventory\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-gdrsz\" (UID: \"ab8babaf-acb3-4c27-a8bd-abc56808e9d7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gdrsz" Feb 28 05:06:46 crc kubenswrapper[5014]: I0228 05:06:46.399390 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdp8j\" (UniqueName: \"kubernetes.io/projected/ab8babaf-acb3-4c27-a8bd-abc56808e9d7-kube-api-access-sdp8j\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gdrsz\" (UID: \"ab8babaf-acb3-4c27-a8bd-abc56808e9d7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gdrsz" Feb 28 05:06:46 crc kubenswrapper[5014]: I0228 05:06:46.399458 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8babaf-acb3-4c27-a8bd-abc56808e9d7-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gdrsz\" (UID: \"ab8babaf-acb3-4c27-a8bd-abc56808e9d7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gdrsz" Feb 28 05:06:46 crc kubenswrapper[5014]: I0228 05:06:46.399583 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ab8babaf-acb3-4c27-a8bd-abc56808e9d7-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gdrsz\" (UID: \"ab8babaf-acb3-4c27-a8bd-abc56808e9d7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gdrsz" Feb 28 05:06:46 crc kubenswrapper[5014]: I0228 05:06:46.404029 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ab8babaf-acb3-4c27-a8bd-abc56808e9d7-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gdrsz\" (UID: \"ab8babaf-acb3-4c27-a8bd-abc56808e9d7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gdrsz" Feb 28 05:06:46 crc kubenswrapper[5014]: I0228 05:06:46.404590 5014 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab8babaf-acb3-4c27-a8bd-abc56808e9d7-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gdrsz\" (UID: \"ab8babaf-acb3-4c27-a8bd-abc56808e9d7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gdrsz" Feb 28 05:06:46 crc kubenswrapper[5014]: I0228 05:06:46.414206 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab8babaf-acb3-4c27-a8bd-abc56808e9d7-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gdrsz\" (UID: \"ab8babaf-acb3-4c27-a8bd-abc56808e9d7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gdrsz" Feb 28 05:06:46 crc kubenswrapper[5014]: I0228 05:06:46.416205 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8babaf-acb3-4c27-a8bd-abc56808e9d7-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gdrsz\" (UID: \"ab8babaf-acb3-4c27-a8bd-abc56808e9d7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gdrsz" Feb 28 05:06:46 crc kubenswrapper[5014]: I0228 05:06:46.423256 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdp8j\" (UniqueName: \"kubernetes.io/projected/ab8babaf-acb3-4c27-a8bd-abc56808e9d7-kube-api-access-sdp8j\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-gdrsz\" (UID: \"ab8babaf-acb3-4c27-a8bd-abc56808e9d7\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gdrsz" Feb 28 05:06:46 crc kubenswrapper[5014]: I0228 05:06:46.538250 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gdrsz" Feb 28 05:06:47 crc kubenswrapper[5014]: I0228 05:06:47.096686 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-gdrsz"] Feb 28 05:06:47 crc kubenswrapper[5014]: W0228 05:06:47.099142 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podab8babaf_acb3_4c27_a8bd_abc56808e9d7.slice/crio-dbc49247b22ed3ad5304e96eec292e2a243cfa081bd8b58dcd864a62b83d796f WatchSource:0}: Error finding container dbc49247b22ed3ad5304e96eec292e2a243cfa081bd8b58dcd864a62b83d796f: Status 404 returned error can't find the container with id dbc49247b22ed3ad5304e96eec292e2a243cfa081bd8b58dcd864a62b83d796f Feb 28 05:06:47 crc kubenswrapper[5014]: I0228 05:06:47.964169 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gdrsz" event={"ID":"ab8babaf-acb3-4c27-a8bd-abc56808e9d7","Type":"ContainerStarted","Data":"eac82fdff9317f0bc962361a03cb26f8906d85b734fedc748f802d9bd79d10ee"} Feb 28 05:06:47 crc kubenswrapper[5014]: I0228 05:06:47.964796 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gdrsz" event={"ID":"ab8babaf-acb3-4c27-a8bd-abc56808e9d7","Type":"ContainerStarted","Data":"dbc49247b22ed3ad5304e96eec292e2a243cfa081bd8b58dcd864a62b83d796f"} Feb 28 05:06:47 crc kubenswrapper[5014]: I0228 05:06:47.996720 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gdrsz" podStartSLOduration=1.6016272919999999 podStartE2EDuration="1.996697093s" podCreationTimestamp="2026-02-28 05:06:46 +0000 UTC" firstStartedPulling="2026-02-28 05:06:47.10225893 +0000 UTC m=+1995.772384840" lastFinishedPulling="2026-02-28 05:06:47.497328721 +0000 UTC m=+1996.167454641" observedRunningTime="2026-02-28 
05:06:47.98666042 +0000 UTC m=+1996.656786340" watchObservedRunningTime="2026-02-28 05:06:47.996697093 +0000 UTC m=+1996.666823013" Feb 28 05:06:57 crc kubenswrapper[5014]: I0228 05:06:57.251000 5014 scope.go:117] "RemoveContainer" containerID="4336e634cd3bfc21cb020210e89a64d1b7796e122363abe138093d9133be63cb" Feb 28 05:06:57 crc kubenswrapper[5014]: I0228 05:06:57.315514 5014 scope.go:117] "RemoveContainer" containerID="945a4a6d7e42e896cfa5eed88c95b74d1e4eba29597b63eb863f7e55fb09e0ae" Feb 28 05:07:02 crc kubenswrapper[5014]: I0228 05:07:02.765113 5014 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-6c68684b95-vvvhf" podUID="6d31e889-55bb-4dc4-b470-dcb11b4438a7" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 28 05:07:49 crc kubenswrapper[5014]: I0228 05:07:49.627658 5014 generic.go:334] "Generic (PLEG): container finished" podID="ab8babaf-acb3-4c27-a8bd-abc56808e9d7" containerID="eac82fdff9317f0bc962361a03cb26f8906d85b734fedc748f802d9bd79d10ee" exitCode=0 Feb 28 05:07:49 crc kubenswrapper[5014]: I0228 05:07:49.627768 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gdrsz" event={"ID":"ab8babaf-acb3-4c27-a8bd-abc56808e9d7","Type":"ContainerDied","Data":"eac82fdff9317f0bc962361a03cb26f8906d85b734fedc748f802d9bd79d10ee"} Feb 28 05:07:51 crc kubenswrapper[5014]: I0228 05:07:51.107592 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gdrsz" Feb 28 05:07:51 crc kubenswrapper[5014]: I0228 05:07:51.259590 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab8babaf-acb3-4c27-a8bd-abc56808e9d7-ssh-key-openstack-edpm-ipam\") pod \"ab8babaf-acb3-4c27-a8bd-abc56808e9d7\" (UID: \"ab8babaf-acb3-4c27-a8bd-abc56808e9d7\") " Feb 28 05:07:51 crc kubenswrapper[5014]: I0228 05:07:51.259749 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ab8babaf-acb3-4c27-a8bd-abc56808e9d7-ovncontroller-config-0\") pod \"ab8babaf-acb3-4c27-a8bd-abc56808e9d7\" (UID: \"ab8babaf-acb3-4c27-a8bd-abc56808e9d7\") " Feb 28 05:07:51 crc kubenswrapper[5014]: I0228 05:07:51.259899 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab8babaf-acb3-4c27-a8bd-abc56808e9d7-inventory\") pod \"ab8babaf-acb3-4c27-a8bd-abc56808e9d7\" (UID: \"ab8babaf-acb3-4c27-a8bd-abc56808e9d7\") " Feb 28 05:07:51 crc kubenswrapper[5014]: I0228 05:07:51.259956 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8babaf-acb3-4c27-a8bd-abc56808e9d7-ovn-combined-ca-bundle\") pod \"ab8babaf-acb3-4c27-a8bd-abc56808e9d7\" (UID: \"ab8babaf-acb3-4c27-a8bd-abc56808e9d7\") " Feb 28 05:07:51 crc kubenswrapper[5014]: I0228 05:07:51.260204 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdp8j\" (UniqueName: \"kubernetes.io/projected/ab8babaf-acb3-4c27-a8bd-abc56808e9d7-kube-api-access-sdp8j\") pod \"ab8babaf-acb3-4c27-a8bd-abc56808e9d7\" (UID: \"ab8babaf-acb3-4c27-a8bd-abc56808e9d7\") " Feb 28 05:07:51 crc kubenswrapper[5014]: I0228 05:07:51.266358 5014 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab8babaf-acb3-4c27-a8bd-abc56808e9d7-kube-api-access-sdp8j" (OuterVolumeSpecName: "kube-api-access-sdp8j") pod "ab8babaf-acb3-4c27-a8bd-abc56808e9d7" (UID: "ab8babaf-acb3-4c27-a8bd-abc56808e9d7"). InnerVolumeSpecName "kube-api-access-sdp8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:07:51 crc kubenswrapper[5014]: I0228 05:07:51.266430 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab8babaf-acb3-4c27-a8bd-abc56808e9d7-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "ab8babaf-acb3-4c27-a8bd-abc56808e9d7" (UID: "ab8babaf-acb3-4c27-a8bd-abc56808e9d7"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:07:51 crc kubenswrapper[5014]: I0228 05:07:51.289043 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab8babaf-acb3-4c27-a8bd-abc56808e9d7-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ab8babaf-acb3-4c27-a8bd-abc56808e9d7" (UID: "ab8babaf-acb3-4c27-a8bd-abc56808e9d7"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:07:51 crc kubenswrapper[5014]: I0228 05:07:51.293615 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab8babaf-acb3-4c27-a8bd-abc56808e9d7-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "ab8babaf-acb3-4c27-a8bd-abc56808e9d7" (UID: "ab8babaf-acb3-4c27-a8bd-abc56808e9d7"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 05:07:51 crc kubenswrapper[5014]: I0228 05:07:51.320028 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ab8babaf-acb3-4c27-a8bd-abc56808e9d7-inventory" (OuterVolumeSpecName: "inventory") pod "ab8babaf-acb3-4c27-a8bd-abc56808e9d7" (UID: "ab8babaf-acb3-4c27-a8bd-abc56808e9d7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:07:51 crc kubenswrapper[5014]: I0228 05:07:51.363122 5014 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ab8babaf-acb3-4c27-a8bd-abc56808e9d7-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 28 05:07:51 crc kubenswrapper[5014]: I0228 05:07:51.363189 5014 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ab8babaf-acb3-4c27-a8bd-abc56808e9d7-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 05:07:51 crc kubenswrapper[5014]: I0228 05:07:51.363208 5014 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ab8babaf-acb3-4c27-a8bd-abc56808e9d7-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 05:07:51 crc kubenswrapper[5014]: I0228 05:07:51.363226 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdp8j\" (UniqueName: \"kubernetes.io/projected/ab8babaf-acb3-4c27-a8bd-abc56808e9d7-kube-api-access-sdp8j\") on node \"crc\" DevicePath \"\"" Feb 28 05:07:51 crc kubenswrapper[5014]: I0228 05:07:51.363242 5014 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ab8babaf-acb3-4c27-a8bd-abc56808e9d7-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 05:07:51 crc kubenswrapper[5014]: I0228 05:07:51.648203 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gdrsz" event={"ID":"ab8babaf-acb3-4c27-a8bd-abc56808e9d7","Type":"ContainerDied","Data":"dbc49247b22ed3ad5304e96eec292e2a243cfa081bd8b58dcd864a62b83d796f"} Feb 28 05:07:51 crc kubenswrapper[5014]: I0228 05:07:51.648266 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbc49247b22ed3ad5304e96eec292e2a243cfa081bd8b58dcd864a62b83d796f" Feb 28 05:07:51 crc kubenswrapper[5014]: I0228 05:07:51.648294 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-gdrsz" Feb 28 05:07:51 crc kubenswrapper[5014]: I0228 05:07:51.788218 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq"] Feb 28 05:07:51 crc kubenswrapper[5014]: E0228 05:07:51.788798 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab8babaf-acb3-4c27-a8bd-abc56808e9d7" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 28 05:07:51 crc kubenswrapper[5014]: I0228 05:07:51.788846 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab8babaf-acb3-4c27-a8bd-abc56808e9d7" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 28 05:07:51 crc kubenswrapper[5014]: I0228 05:07:51.789158 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab8babaf-acb3-4c27-a8bd-abc56808e9d7" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 28 05:07:51 crc kubenswrapper[5014]: I0228 05:07:51.790171 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq" Feb 28 05:07:51 crc kubenswrapper[5014]: I0228 05:07:51.793554 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 05:07:51 crc kubenswrapper[5014]: I0228 05:07:51.794303 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 28 05:07:51 crc kubenswrapper[5014]: I0228 05:07:51.794698 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 05:07:51 crc kubenswrapper[5014]: I0228 05:07:51.795018 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 05:07:51 crc kubenswrapper[5014]: I0228 05:07:51.795266 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6dz6b" Feb 28 05:07:51 crc kubenswrapper[5014]: I0228 05:07:51.797112 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 28 05:07:51 crc kubenswrapper[5014]: I0228 05:07:51.827035 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq"] Feb 28 05:07:51 crc kubenswrapper[5014]: I0228 05:07:51.977920 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8746177b-a5ee-41d6-8d6c-94e7eae1082e-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq\" (UID: \"8746177b-a5ee-41d6-8d6c-94e7eae1082e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq" Feb 28 05:07:51 crc kubenswrapper[5014]: I0228 05:07:51.978325 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8746177b-a5ee-41d6-8d6c-94e7eae1082e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq\" (UID: \"8746177b-a5ee-41d6-8d6c-94e7eae1082e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq" Feb 28 05:07:51 crc kubenswrapper[5014]: I0228 05:07:51.978452 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m8w8\" (UniqueName: \"kubernetes.io/projected/8746177b-a5ee-41d6-8d6c-94e7eae1082e-kube-api-access-5m8w8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq\" (UID: \"8746177b-a5ee-41d6-8d6c-94e7eae1082e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq" Feb 28 05:07:51 crc kubenswrapper[5014]: I0228 05:07:51.978575 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8746177b-a5ee-41d6-8d6c-94e7eae1082e-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq\" (UID: \"8746177b-a5ee-41d6-8d6c-94e7eae1082e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq" Feb 28 05:07:51 crc kubenswrapper[5014]: I0228 05:07:51.978600 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8746177b-a5ee-41d6-8d6c-94e7eae1082e-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq\" (UID: \"8746177b-a5ee-41d6-8d6c-94e7eae1082e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq" Feb 28 05:07:51 crc kubenswrapper[5014]: I0228 05:07:51.978627 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8746177b-a5ee-41d6-8d6c-94e7eae1082e-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq\" (UID: \"8746177b-a5ee-41d6-8d6c-94e7eae1082e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq" Feb 28 05:07:52 crc kubenswrapper[5014]: I0228 05:07:52.080569 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8746177b-a5ee-41d6-8d6c-94e7eae1082e-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq\" (UID: \"8746177b-a5ee-41d6-8d6c-94e7eae1082e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq" Feb 28 05:07:52 crc kubenswrapper[5014]: I0228 05:07:52.080657 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8746177b-a5ee-41d6-8d6c-94e7eae1082e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq\" (UID: \"8746177b-a5ee-41d6-8d6c-94e7eae1082e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq" Feb 28 05:07:52 crc kubenswrapper[5014]: I0228 05:07:52.080770 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5m8w8\" (UniqueName: \"kubernetes.io/projected/8746177b-a5ee-41d6-8d6c-94e7eae1082e-kube-api-access-5m8w8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq\" (UID: \"8746177b-a5ee-41d6-8d6c-94e7eae1082e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq" Feb 28 05:07:52 crc kubenswrapper[5014]: I0228 05:07:52.080903 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8746177b-a5ee-41d6-8d6c-94e7eae1082e-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq\" (UID: \"8746177b-a5ee-41d6-8d6c-94e7eae1082e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq" Feb 28 05:07:52 crc kubenswrapper[5014]: I0228 05:07:52.080948 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8746177b-a5ee-41d6-8d6c-94e7eae1082e-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq\" (UID: \"8746177b-a5ee-41d6-8d6c-94e7eae1082e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq" Feb 28 05:07:52 crc kubenswrapper[5014]: I0228 05:07:52.080970 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8746177b-a5ee-41d6-8d6c-94e7eae1082e-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq\" (UID: \"8746177b-a5ee-41d6-8d6c-94e7eae1082e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq" Feb 28 05:07:52 crc kubenswrapper[5014]: I0228 05:07:52.086400 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8746177b-a5ee-41d6-8d6c-94e7eae1082e-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq\" (UID: \"8746177b-a5ee-41d6-8d6c-94e7eae1082e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq" Feb 28 05:07:52 crc kubenswrapper[5014]: I0228 05:07:52.087849 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8746177b-a5ee-41d6-8d6c-94e7eae1082e-neutron-ovn-metadata-agent-neutron-config-0\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq\" (UID: \"8746177b-a5ee-41d6-8d6c-94e7eae1082e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq" Feb 28 05:07:52 crc kubenswrapper[5014]: I0228 05:07:52.088527 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8746177b-a5ee-41d6-8d6c-94e7eae1082e-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq\" (UID: \"8746177b-a5ee-41d6-8d6c-94e7eae1082e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq" Feb 28 05:07:52 crc kubenswrapper[5014]: I0228 05:07:52.094264 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8746177b-a5ee-41d6-8d6c-94e7eae1082e-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq\" (UID: \"8746177b-a5ee-41d6-8d6c-94e7eae1082e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq" Feb 28 05:07:52 crc kubenswrapper[5014]: I0228 05:07:52.095699 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8746177b-a5ee-41d6-8d6c-94e7eae1082e-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq\" (UID: \"8746177b-a5ee-41d6-8d6c-94e7eae1082e\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq" Feb 28 05:07:52 crc kubenswrapper[5014]: I0228 05:07:52.102319 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5m8w8\" (UniqueName: \"kubernetes.io/projected/8746177b-a5ee-41d6-8d6c-94e7eae1082e-kube-api-access-5m8w8\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq\" (UID: \"8746177b-a5ee-41d6-8d6c-94e7eae1082e\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq" Feb 28 05:07:52 crc kubenswrapper[5014]: I0228 05:07:52.140601 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq" Feb 28 05:07:52 crc kubenswrapper[5014]: I0228 05:07:52.719958 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq"] Feb 28 05:07:52 crc kubenswrapper[5014]: W0228 05:07:52.721249 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8746177b_a5ee_41d6_8d6c_94e7eae1082e.slice/crio-79de26fab5d8dc6a35d88be6df41f889f7593b22c08ad00b67e7618d1709f0a1 WatchSource:0}: Error finding container 79de26fab5d8dc6a35d88be6df41f889f7593b22c08ad00b67e7618d1709f0a1: Status 404 returned error can't find the container with id 79de26fab5d8dc6a35d88be6df41f889f7593b22c08ad00b67e7618d1709f0a1 Feb 28 05:07:53 crc kubenswrapper[5014]: I0228 05:07:53.670488 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq" event={"ID":"8746177b-a5ee-41d6-8d6c-94e7eae1082e","Type":"ContainerStarted","Data":"963b6e79f6ef07f23155433737b0e43536c50763056f7360c50b09962cce0d3a"} Feb 28 05:07:53 crc kubenswrapper[5014]: I0228 05:07:53.670897 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq" event={"ID":"8746177b-a5ee-41d6-8d6c-94e7eae1082e","Type":"ContainerStarted","Data":"79de26fab5d8dc6a35d88be6df41f889f7593b22c08ad00b67e7618d1709f0a1"} Feb 28 05:07:53 crc kubenswrapper[5014]: I0228 05:07:53.687302 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq" podStartSLOduration=2.242822721 podStartE2EDuration="2.687280424s" 
podCreationTimestamp="2026-02-28 05:07:51 +0000 UTC" firstStartedPulling="2026-02-28 05:07:52.723620367 +0000 UTC m=+2061.393746287" lastFinishedPulling="2026-02-28 05:07:53.16807807 +0000 UTC m=+2061.838203990" observedRunningTime="2026-02-28 05:07:53.684465357 +0000 UTC m=+2062.354591267" watchObservedRunningTime="2026-02-28 05:07:53.687280424 +0000 UTC m=+2062.357406334" Feb 28 05:08:00 crc kubenswrapper[5014]: I0228 05:08:00.145405 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537588-z6fj9"] Feb 28 05:08:00 crc kubenswrapper[5014]: I0228 05:08:00.147882 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537588-z6fj9" Feb 28 05:08:00 crc kubenswrapper[5014]: I0228 05:08:00.150030 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 05:08:00 crc kubenswrapper[5014]: I0228 05:08:00.150179 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-r6pdk" Feb 28 05:08:00 crc kubenswrapper[5014]: I0228 05:08:00.151039 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 05:08:00 crc kubenswrapper[5014]: I0228 05:08:00.166957 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537588-z6fj9"] Feb 28 05:08:00 crc kubenswrapper[5014]: I0228 05:08:00.254380 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mssq8\" (UniqueName: \"kubernetes.io/projected/43276879-eb1e-4f8d-929d-30c2d43663cb-kube-api-access-mssq8\") pod \"auto-csr-approver-29537588-z6fj9\" (UID: \"43276879-eb1e-4f8d-929d-30c2d43663cb\") " pod="openshift-infra/auto-csr-approver-29537588-z6fj9" Feb 28 05:08:00 crc kubenswrapper[5014]: I0228 05:08:00.356397 5014 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mssq8\" (UniqueName: \"kubernetes.io/projected/43276879-eb1e-4f8d-929d-30c2d43663cb-kube-api-access-mssq8\") pod \"auto-csr-approver-29537588-z6fj9\" (UID: \"43276879-eb1e-4f8d-929d-30c2d43663cb\") " pod="openshift-infra/auto-csr-approver-29537588-z6fj9" Feb 28 05:08:00 crc kubenswrapper[5014]: I0228 05:08:00.395048 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mssq8\" (UniqueName: \"kubernetes.io/projected/43276879-eb1e-4f8d-929d-30c2d43663cb-kube-api-access-mssq8\") pod \"auto-csr-approver-29537588-z6fj9\" (UID: \"43276879-eb1e-4f8d-929d-30c2d43663cb\") " pod="openshift-infra/auto-csr-approver-29537588-z6fj9" Feb 28 05:08:00 crc kubenswrapper[5014]: I0228 05:08:00.467264 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537588-z6fj9" Feb 28 05:08:00 crc kubenswrapper[5014]: I0228 05:08:00.923849 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537588-z6fj9"] Feb 28 05:08:00 crc kubenswrapper[5014]: W0228 05:08:00.926361 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod43276879_eb1e_4f8d_929d_30c2d43663cb.slice/crio-52270ebc611e920e150b938b3ce61649e2e7f4fccd13d430239543b22baef1bf WatchSource:0}: Error finding container 52270ebc611e920e150b938b3ce61649e2e7f4fccd13d430239543b22baef1bf: Status 404 returned error can't find the container with id 52270ebc611e920e150b938b3ce61649e2e7f4fccd13d430239543b22baef1bf Feb 28 05:08:01 crc kubenswrapper[5014]: I0228 05:08:01.742721 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537588-z6fj9" event={"ID":"43276879-eb1e-4f8d-929d-30c2d43663cb","Type":"ContainerStarted","Data":"52270ebc611e920e150b938b3ce61649e2e7f4fccd13d430239543b22baef1bf"} Feb 28 05:08:02 crc 
kubenswrapper[5014]: I0228 05:08:02.755980 5014 generic.go:334] "Generic (PLEG): container finished" podID="43276879-eb1e-4f8d-929d-30c2d43663cb" containerID="f0348708d037922f7c1f4760c539f717f36d8c0c3fb3814fddf60f5a5e7f61fb" exitCode=0 Feb 28 05:08:02 crc kubenswrapper[5014]: I0228 05:08:02.756102 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537588-z6fj9" event={"ID":"43276879-eb1e-4f8d-929d-30c2d43663cb","Type":"ContainerDied","Data":"f0348708d037922f7c1f4760c539f717f36d8c0c3fb3814fddf60f5a5e7f61fb"} Feb 28 05:08:04 crc kubenswrapper[5014]: I0228 05:08:04.243289 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537588-z6fj9" Feb 28 05:08:04 crc kubenswrapper[5014]: I0228 05:08:04.331783 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mssq8\" (UniqueName: \"kubernetes.io/projected/43276879-eb1e-4f8d-929d-30c2d43663cb-kube-api-access-mssq8\") pod \"43276879-eb1e-4f8d-929d-30c2d43663cb\" (UID: \"43276879-eb1e-4f8d-929d-30c2d43663cb\") " Feb 28 05:08:04 crc kubenswrapper[5014]: I0228 05:08:04.337996 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43276879-eb1e-4f8d-929d-30c2d43663cb-kube-api-access-mssq8" (OuterVolumeSpecName: "kube-api-access-mssq8") pod "43276879-eb1e-4f8d-929d-30c2d43663cb" (UID: "43276879-eb1e-4f8d-929d-30c2d43663cb"). InnerVolumeSpecName "kube-api-access-mssq8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:08:04 crc kubenswrapper[5014]: I0228 05:08:04.434517 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mssq8\" (UniqueName: \"kubernetes.io/projected/43276879-eb1e-4f8d-929d-30c2d43663cb-kube-api-access-mssq8\") on node \"crc\" DevicePath \"\"" Feb 28 05:08:04 crc kubenswrapper[5014]: I0228 05:08:04.786357 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537588-z6fj9" event={"ID":"43276879-eb1e-4f8d-929d-30c2d43663cb","Type":"ContainerDied","Data":"52270ebc611e920e150b938b3ce61649e2e7f4fccd13d430239543b22baef1bf"} Feb 28 05:08:04 crc kubenswrapper[5014]: I0228 05:08:04.786599 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52270ebc611e920e150b938b3ce61649e2e7f4fccd13d430239543b22baef1bf" Feb 28 05:08:04 crc kubenswrapper[5014]: I0228 05:08:04.786396 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537588-z6fj9" Feb 28 05:08:05 crc kubenswrapper[5014]: I0228 05:08:05.303847 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537582-d887m"] Feb 28 05:08:05 crc kubenswrapper[5014]: I0228 05:08:05.311244 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537582-d887m"] Feb 28 05:08:06 crc kubenswrapper[5014]: I0228 05:08:06.186621 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5df6daac-d482-491a-ab9a-10809fcbe91e" path="/var/lib/kubelet/pods/5df6daac-d482-491a-ab9a-10809fcbe91e/volumes" Feb 28 05:08:42 crc kubenswrapper[5014]: I0228 05:08:42.218669 5014 generic.go:334] "Generic (PLEG): container finished" podID="8746177b-a5ee-41d6-8d6c-94e7eae1082e" containerID="963b6e79f6ef07f23155433737b0e43536c50763056f7360c50b09962cce0d3a" exitCode=0 Feb 28 05:08:42 crc kubenswrapper[5014]: I0228 05:08:42.218836 5014 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq" event={"ID":"8746177b-a5ee-41d6-8d6c-94e7eae1082e","Type":"ContainerDied","Data":"963b6e79f6ef07f23155433737b0e43536c50763056f7360c50b09962cce0d3a"} Feb 28 05:08:43 crc kubenswrapper[5014]: I0228 05:08:43.705638 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq" Feb 28 05:08:43 crc kubenswrapper[5014]: I0228 05:08:43.759994 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5m8w8\" (UniqueName: \"kubernetes.io/projected/8746177b-a5ee-41d6-8d6c-94e7eae1082e-kube-api-access-5m8w8\") pod \"8746177b-a5ee-41d6-8d6c-94e7eae1082e\" (UID: \"8746177b-a5ee-41d6-8d6c-94e7eae1082e\") " Feb 28 05:08:43 crc kubenswrapper[5014]: I0228 05:08:43.760065 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8746177b-a5ee-41d6-8d6c-94e7eae1082e-neutron-metadata-combined-ca-bundle\") pod \"8746177b-a5ee-41d6-8d6c-94e7eae1082e\" (UID: \"8746177b-a5ee-41d6-8d6c-94e7eae1082e\") " Feb 28 05:08:43 crc kubenswrapper[5014]: I0228 05:08:43.760111 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8746177b-a5ee-41d6-8d6c-94e7eae1082e-inventory\") pod \"8746177b-a5ee-41d6-8d6c-94e7eae1082e\" (UID: \"8746177b-a5ee-41d6-8d6c-94e7eae1082e\") " Feb 28 05:08:43 crc kubenswrapper[5014]: I0228 05:08:43.760191 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8746177b-a5ee-41d6-8d6c-94e7eae1082e-neutron-ovn-metadata-agent-neutron-config-0\") pod \"8746177b-a5ee-41d6-8d6c-94e7eae1082e\" (UID: 
\"8746177b-a5ee-41d6-8d6c-94e7eae1082e\") " Feb 28 05:08:43 crc kubenswrapper[5014]: I0228 05:08:43.760285 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8746177b-a5ee-41d6-8d6c-94e7eae1082e-ssh-key-openstack-edpm-ipam\") pod \"8746177b-a5ee-41d6-8d6c-94e7eae1082e\" (UID: \"8746177b-a5ee-41d6-8d6c-94e7eae1082e\") " Feb 28 05:08:43 crc kubenswrapper[5014]: I0228 05:08:43.760441 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8746177b-a5ee-41d6-8d6c-94e7eae1082e-nova-metadata-neutron-config-0\") pod \"8746177b-a5ee-41d6-8d6c-94e7eae1082e\" (UID: \"8746177b-a5ee-41d6-8d6c-94e7eae1082e\") " Feb 28 05:08:43 crc kubenswrapper[5014]: I0228 05:08:43.774443 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8746177b-a5ee-41d6-8d6c-94e7eae1082e-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "8746177b-a5ee-41d6-8d6c-94e7eae1082e" (UID: "8746177b-a5ee-41d6-8d6c-94e7eae1082e"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:08:43 crc kubenswrapper[5014]: I0228 05:08:43.779082 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8746177b-a5ee-41d6-8d6c-94e7eae1082e-kube-api-access-5m8w8" (OuterVolumeSpecName: "kube-api-access-5m8w8") pod "8746177b-a5ee-41d6-8d6c-94e7eae1082e" (UID: "8746177b-a5ee-41d6-8d6c-94e7eae1082e"). InnerVolumeSpecName "kube-api-access-5m8w8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:08:43 crc kubenswrapper[5014]: I0228 05:08:43.810103 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8746177b-a5ee-41d6-8d6c-94e7eae1082e-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "8746177b-a5ee-41d6-8d6c-94e7eae1082e" (UID: "8746177b-a5ee-41d6-8d6c-94e7eae1082e"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:08:43 crc kubenswrapper[5014]: I0228 05:08:43.813985 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8746177b-a5ee-41d6-8d6c-94e7eae1082e-inventory" (OuterVolumeSpecName: "inventory") pod "8746177b-a5ee-41d6-8d6c-94e7eae1082e" (UID: "8746177b-a5ee-41d6-8d6c-94e7eae1082e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:08:43 crc kubenswrapper[5014]: I0228 05:08:43.816933 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8746177b-a5ee-41d6-8d6c-94e7eae1082e-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "8746177b-a5ee-41d6-8d6c-94e7eae1082e" (UID: "8746177b-a5ee-41d6-8d6c-94e7eae1082e"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:08:43 crc kubenswrapper[5014]: I0228 05:08:43.817329 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8746177b-a5ee-41d6-8d6c-94e7eae1082e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8746177b-a5ee-41d6-8d6c-94e7eae1082e" (UID: "8746177b-a5ee-41d6-8d6c-94e7eae1082e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:08:43 crc kubenswrapper[5014]: I0228 05:08:43.862541 5014 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8746177b-a5ee-41d6-8d6c-94e7eae1082e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 05:08:43 crc kubenswrapper[5014]: I0228 05:08:43.862598 5014 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8746177b-a5ee-41d6-8d6c-94e7eae1082e-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 28 05:08:43 crc kubenswrapper[5014]: I0228 05:08:43.862610 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5m8w8\" (UniqueName: \"kubernetes.io/projected/8746177b-a5ee-41d6-8d6c-94e7eae1082e-kube-api-access-5m8w8\") on node \"crc\" DevicePath \"\"" Feb 28 05:08:43 crc kubenswrapper[5014]: I0228 05:08:43.862619 5014 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8746177b-a5ee-41d6-8d6c-94e7eae1082e-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 05:08:43 crc kubenswrapper[5014]: I0228 05:08:43.862629 5014 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8746177b-a5ee-41d6-8d6c-94e7eae1082e-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 05:08:43 crc kubenswrapper[5014]: I0228 05:08:43.862657 5014 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/8746177b-a5ee-41d6-8d6c-94e7eae1082e-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 28 05:08:44 crc kubenswrapper[5014]: I0228 05:08:44.240130 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq" event={"ID":"8746177b-a5ee-41d6-8d6c-94e7eae1082e","Type":"ContainerDied","Data":"79de26fab5d8dc6a35d88be6df41f889f7593b22c08ad00b67e7618d1709f0a1"} Feb 28 05:08:44 crc kubenswrapper[5014]: I0228 05:08:44.240172 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79de26fab5d8dc6a35d88be6df41f889f7593b22c08ad00b67e7618d1709f0a1" Feb 28 05:08:44 crc kubenswrapper[5014]: I0228 05:08:44.240203 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq" Feb 28 05:08:44 crc kubenswrapper[5014]: I0228 05:08:44.346744 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6"] Feb 28 05:08:44 crc kubenswrapper[5014]: E0228 05:08:44.347256 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43276879-eb1e-4f8d-929d-30c2d43663cb" containerName="oc" Feb 28 05:08:44 crc kubenswrapper[5014]: I0228 05:08:44.347281 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="43276879-eb1e-4f8d-929d-30c2d43663cb" containerName="oc" Feb 28 05:08:44 crc kubenswrapper[5014]: E0228 05:08:44.347295 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8746177b-a5ee-41d6-8d6c-94e7eae1082e" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 28 05:08:44 crc kubenswrapper[5014]: I0228 05:08:44.347307 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="8746177b-a5ee-41d6-8d6c-94e7eae1082e" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 28 05:08:44 crc kubenswrapper[5014]: I0228 05:08:44.347541 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="43276879-eb1e-4f8d-929d-30c2d43663cb" containerName="oc" Feb 28 05:08:44 crc kubenswrapper[5014]: I0228 05:08:44.347576 5014 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="8746177b-a5ee-41d6-8d6c-94e7eae1082e" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 28 05:08:44 crc kubenswrapper[5014]: I0228 05:08:44.348314 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6" Feb 28 05:08:44 crc kubenswrapper[5014]: I0228 05:08:44.350500 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 05:08:44 crc kubenswrapper[5014]: I0228 05:08:44.350884 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 05:08:44 crc kubenswrapper[5014]: I0228 05:08:44.351004 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6dz6b" Feb 28 05:08:44 crc kubenswrapper[5014]: I0228 05:08:44.351034 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 28 05:08:44 crc kubenswrapper[5014]: I0228 05:08:44.352005 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 05:08:44 crc kubenswrapper[5014]: I0228 05:08:44.358329 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6"] Feb 28 05:08:44 crc kubenswrapper[5014]: I0228 05:08:44.476756 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85e8a1f1-6f8c-4af8-9273-dc37192bea6a-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6\" (UID: \"85e8a1f1-6f8c-4af8-9273-dc37192bea6a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6" Feb 28 05:08:44 crc kubenswrapper[5014]: I0228 05:08:44.476895 5014 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85e8a1f1-6f8c-4af8-9273-dc37192bea6a-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6\" (UID: \"85e8a1f1-6f8c-4af8-9273-dc37192bea6a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6" Feb 28 05:08:44 crc kubenswrapper[5014]: I0228 05:08:44.476978 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl6pq\" (UniqueName: \"kubernetes.io/projected/85e8a1f1-6f8c-4af8-9273-dc37192bea6a-kube-api-access-kl6pq\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6\" (UID: \"85e8a1f1-6f8c-4af8-9273-dc37192bea6a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6" Feb 28 05:08:44 crc kubenswrapper[5014]: I0228 05:08:44.477109 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/85e8a1f1-6f8c-4af8-9273-dc37192bea6a-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6\" (UID: \"85e8a1f1-6f8c-4af8-9273-dc37192bea6a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6" Feb 28 05:08:44 crc kubenswrapper[5014]: I0228 05:08:44.477171 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85e8a1f1-6f8c-4af8-9273-dc37192bea6a-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6\" (UID: \"85e8a1f1-6f8c-4af8-9273-dc37192bea6a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6" Feb 28 05:08:44 crc kubenswrapper[5014]: I0228 05:08:44.579527 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/85e8a1f1-6f8c-4af8-9273-dc37192bea6a-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6\" (UID: \"85e8a1f1-6f8c-4af8-9273-dc37192bea6a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6" Feb 28 05:08:44 crc kubenswrapper[5014]: I0228 05:08:44.579575 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85e8a1f1-6f8c-4af8-9273-dc37192bea6a-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6\" (UID: \"85e8a1f1-6f8c-4af8-9273-dc37192bea6a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6" Feb 28 05:08:44 crc kubenswrapper[5014]: I0228 05:08:44.579600 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl6pq\" (UniqueName: \"kubernetes.io/projected/85e8a1f1-6f8c-4af8-9273-dc37192bea6a-kube-api-access-kl6pq\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6\" (UID: \"85e8a1f1-6f8c-4af8-9273-dc37192bea6a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6" Feb 28 05:08:44 crc kubenswrapper[5014]: I0228 05:08:44.579635 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/85e8a1f1-6f8c-4af8-9273-dc37192bea6a-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6\" (UID: \"85e8a1f1-6f8c-4af8-9273-dc37192bea6a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6" Feb 28 05:08:44 crc kubenswrapper[5014]: I0228 05:08:44.579663 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85e8a1f1-6f8c-4af8-9273-dc37192bea6a-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6\" (UID: \"85e8a1f1-6f8c-4af8-9273-dc37192bea6a\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6" Feb 28 05:08:44 crc kubenswrapper[5014]: I0228 05:08:44.583268 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85e8a1f1-6f8c-4af8-9273-dc37192bea6a-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6\" (UID: \"85e8a1f1-6f8c-4af8-9273-dc37192bea6a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6" Feb 28 05:08:44 crc kubenswrapper[5014]: I0228 05:08:44.584909 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/85e8a1f1-6f8c-4af8-9273-dc37192bea6a-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6\" (UID: \"85e8a1f1-6f8c-4af8-9273-dc37192bea6a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6" Feb 28 05:08:44 crc kubenswrapper[5014]: I0228 05:08:44.585577 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85e8a1f1-6f8c-4af8-9273-dc37192bea6a-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6\" (UID: \"85e8a1f1-6f8c-4af8-9273-dc37192bea6a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6" Feb 28 05:08:44 crc kubenswrapper[5014]: I0228 05:08:44.598480 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85e8a1f1-6f8c-4af8-9273-dc37192bea6a-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6\" (UID: \"85e8a1f1-6f8c-4af8-9273-dc37192bea6a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6" Feb 28 05:08:44 crc kubenswrapper[5014]: I0228 05:08:44.608307 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl6pq\" (UniqueName: 
\"kubernetes.io/projected/85e8a1f1-6f8c-4af8-9273-dc37192bea6a-kube-api-access-kl6pq\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6\" (UID: \"85e8a1f1-6f8c-4af8-9273-dc37192bea6a\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6" Feb 28 05:08:44 crc kubenswrapper[5014]: I0228 05:08:44.673909 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6" Feb 28 05:08:45 crc kubenswrapper[5014]: I0228 05:08:45.206962 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6"] Feb 28 05:08:45 crc kubenswrapper[5014]: I0228 05:08:45.262244 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6" event={"ID":"85e8a1f1-6f8c-4af8-9273-dc37192bea6a","Type":"ContainerStarted","Data":"4c3169a42a5d1e1819528461d7b53e2ed60f912d2def102586a74662cee37f96"} Feb 28 05:08:45 crc kubenswrapper[5014]: I0228 05:08:45.706609 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 05:08:45 crc kubenswrapper[5014]: I0228 05:08:45.706689 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 05:08:46 crc kubenswrapper[5014]: I0228 05:08:46.271015 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6" 
event={"ID":"85e8a1f1-6f8c-4af8-9273-dc37192bea6a","Type":"ContainerStarted","Data":"d1e0f391c6c0586332cddd21ebff8a8123f1f3dbe81cd0f1c9e7c064e7b8c1b1"} Feb 28 05:08:46 crc kubenswrapper[5014]: I0228 05:08:46.294685 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6" podStartSLOduration=1.82603684 podStartE2EDuration="2.294666023s" podCreationTimestamp="2026-02-28 05:08:44 +0000 UTC" firstStartedPulling="2026-02-28 05:08:45.211274161 +0000 UTC m=+2113.881400071" lastFinishedPulling="2026-02-28 05:08:45.679903344 +0000 UTC m=+2114.350029254" observedRunningTime="2026-02-28 05:08:46.291521288 +0000 UTC m=+2114.961647198" watchObservedRunningTime="2026-02-28 05:08:46.294666023 +0000 UTC m=+2114.964791933" Feb 28 05:08:57 crc kubenswrapper[5014]: I0228 05:08:57.460162 5014 scope.go:117] "RemoveContainer" containerID="7f5ba6d85fe609b8c195c55a1b396c19eb252e2a9adc0b331d7bd4f698034a13" Feb 28 05:09:12 crc kubenswrapper[5014]: I0228 05:09:12.581228 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7k2jv"] Feb 28 05:09:12 crc kubenswrapper[5014]: I0228 05:09:12.585871 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7k2jv" Feb 28 05:09:12 crc kubenswrapper[5014]: I0228 05:09:12.599304 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7k2jv"] Feb 28 05:09:12 crc kubenswrapper[5014]: I0228 05:09:12.717338 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c443dde-177f-4479-a7ea-322d6f953691-catalog-content\") pod \"community-operators-7k2jv\" (UID: \"8c443dde-177f-4479-a7ea-322d6f953691\") " pod="openshift-marketplace/community-operators-7k2jv" Feb 28 05:09:12 crc kubenswrapper[5014]: I0228 05:09:12.717708 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c443dde-177f-4479-a7ea-322d6f953691-utilities\") pod \"community-operators-7k2jv\" (UID: \"8c443dde-177f-4479-a7ea-322d6f953691\") " pod="openshift-marketplace/community-operators-7k2jv" Feb 28 05:09:12 crc kubenswrapper[5014]: I0228 05:09:12.717741 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd54s\" (UniqueName: \"kubernetes.io/projected/8c443dde-177f-4479-a7ea-322d6f953691-kube-api-access-pd54s\") pod \"community-operators-7k2jv\" (UID: \"8c443dde-177f-4479-a7ea-322d6f953691\") " pod="openshift-marketplace/community-operators-7k2jv" Feb 28 05:09:12 crc kubenswrapper[5014]: I0228 05:09:12.818921 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c443dde-177f-4479-a7ea-322d6f953691-catalog-content\") pod \"community-operators-7k2jv\" (UID: \"8c443dde-177f-4479-a7ea-322d6f953691\") " pod="openshift-marketplace/community-operators-7k2jv" Feb 28 05:09:12 crc kubenswrapper[5014]: I0228 05:09:12.819019 5014 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c443dde-177f-4479-a7ea-322d6f953691-utilities\") pod \"community-operators-7k2jv\" (UID: \"8c443dde-177f-4479-a7ea-322d6f953691\") " pod="openshift-marketplace/community-operators-7k2jv" Feb 28 05:09:12 crc kubenswrapper[5014]: I0228 05:09:12.819045 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd54s\" (UniqueName: \"kubernetes.io/projected/8c443dde-177f-4479-a7ea-322d6f953691-kube-api-access-pd54s\") pod \"community-operators-7k2jv\" (UID: \"8c443dde-177f-4479-a7ea-322d6f953691\") " pod="openshift-marketplace/community-operators-7k2jv" Feb 28 05:09:12 crc kubenswrapper[5014]: I0228 05:09:12.819571 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c443dde-177f-4479-a7ea-322d6f953691-catalog-content\") pod \"community-operators-7k2jv\" (UID: \"8c443dde-177f-4479-a7ea-322d6f953691\") " pod="openshift-marketplace/community-operators-7k2jv" Feb 28 05:09:12 crc kubenswrapper[5014]: I0228 05:09:12.819571 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c443dde-177f-4479-a7ea-322d6f953691-utilities\") pod \"community-operators-7k2jv\" (UID: \"8c443dde-177f-4479-a7ea-322d6f953691\") " pod="openshift-marketplace/community-operators-7k2jv" Feb 28 05:09:12 crc kubenswrapper[5014]: I0228 05:09:12.839865 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd54s\" (UniqueName: \"kubernetes.io/projected/8c443dde-177f-4479-a7ea-322d6f953691-kube-api-access-pd54s\") pod \"community-operators-7k2jv\" (UID: \"8c443dde-177f-4479-a7ea-322d6f953691\") " pod="openshift-marketplace/community-operators-7k2jv" Feb 28 05:09:12 crc kubenswrapper[5014]: I0228 05:09:12.926311 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7k2jv" Feb 28 05:09:13 crc kubenswrapper[5014]: I0228 05:09:13.490886 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7k2jv"] Feb 28 05:09:13 crc kubenswrapper[5014]: I0228 05:09:13.562692 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7k2jv" event={"ID":"8c443dde-177f-4479-a7ea-322d6f953691","Type":"ContainerStarted","Data":"03739e5523e3cf96914fafe99d9a531c10640af963043030f0f1525962e12f93"} Feb 28 05:09:14 crc kubenswrapper[5014]: I0228 05:09:14.579689 5014 generic.go:334] "Generic (PLEG): container finished" podID="8c443dde-177f-4479-a7ea-322d6f953691" containerID="8e543a2fbddc9cb36c2bc6de356eeff8e8ec4feab5da919b8bed10607d96b610" exitCode=0 Feb 28 05:09:14 crc kubenswrapper[5014]: I0228 05:09:14.579866 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7k2jv" event={"ID":"8c443dde-177f-4479-a7ea-322d6f953691","Type":"ContainerDied","Data":"8e543a2fbddc9cb36c2bc6de356eeff8e8ec4feab5da919b8bed10607d96b610"} Feb 28 05:09:15 crc kubenswrapper[5014]: I0228 05:09:15.707124 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 05:09:15 crc kubenswrapper[5014]: I0228 05:09:15.707513 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 05:09:16 crc kubenswrapper[5014]: I0228 05:09:16.599972 5014 generic.go:334] "Generic 
(PLEG): container finished" podID="8c443dde-177f-4479-a7ea-322d6f953691" containerID="50b36fad9f4f6621163e61ac4c5f7eca523e691c04121e1e6961adc46d6fa298" exitCode=0 Feb 28 05:09:16 crc kubenswrapper[5014]: I0228 05:09:16.600023 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7k2jv" event={"ID":"8c443dde-177f-4479-a7ea-322d6f953691","Type":"ContainerDied","Data":"50b36fad9f4f6621163e61ac4c5f7eca523e691c04121e1e6961adc46d6fa298"} Feb 28 05:09:17 crc kubenswrapper[5014]: I0228 05:09:17.612605 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7k2jv" event={"ID":"8c443dde-177f-4479-a7ea-322d6f953691","Type":"ContainerStarted","Data":"5eec765d9a013c7ae9cf0e770dfae035c9c06dcb274e3f3367379da549e18d8a"} Feb 28 05:09:17 crc kubenswrapper[5014]: I0228 05:09:17.637216 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7k2jv" podStartSLOduration=3.2451049 podStartE2EDuration="5.63719984s" podCreationTimestamp="2026-02-28 05:09:12 +0000 UTC" firstStartedPulling="2026-02-28 05:09:14.582591128 +0000 UTC m=+2143.252717048" lastFinishedPulling="2026-02-28 05:09:16.974686068 +0000 UTC m=+2145.644811988" observedRunningTime="2026-02-28 05:09:17.631440633 +0000 UTC m=+2146.301566573" watchObservedRunningTime="2026-02-28 05:09:17.63719984 +0000 UTC m=+2146.307325750" Feb 28 05:09:22 crc kubenswrapper[5014]: I0228 05:09:22.927154 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7k2jv" Feb 28 05:09:22 crc kubenswrapper[5014]: I0228 05:09:22.927746 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7k2jv" Feb 28 05:09:23 crc kubenswrapper[5014]: I0228 05:09:23.027771 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-7k2jv" Feb 28 05:09:23 crc kubenswrapper[5014]: I0228 05:09:23.737786 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7k2jv" Feb 28 05:09:23 crc kubenswrapper[5014]: I0228 05:09:23.786786 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7k2jv"] Feb 28 05:09:25 crc kubenswrapper[5014]: I0228 05:09:25.706426 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7k2jv" podUID="8c443dde-177f-4479-a7ea-322d6f953691" containerName="registry-server" containerID="cri-o://5eec765d9a013c7ae9cf0e770dfae035c9c06dcb274e3f3367379da549e18d8a" gracePeriod=2 Feb 28 05:09:26 crc kubenswrapper[5014]: I0228 05:09:26.199244 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7k2jv" Feb 28 05:09:26 crc kubenswrapper[5014]: I0228 05:09:26.300013 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c443dde-177f-4479-a7ea-322d6f953691-utilities\") pod \"8c443dde-177f-4479-a7ea-322d6f953691\" (UID: \"8c443dde-177f-4479-a7ea-322d6f953691\") " Feb 28 05:09:26 crc kubenswrapper[5014]: I0228 05:09:26.300316 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pd54s\" (UniqueName: \"kubernetes.io/projected/8c443dde-177f-4479-a7ea-322d6f953691-kube-api-access-pd54s\") pod \"8c443dde-177f-4479-a7ea-322d6f953691\" (UID: \"8c443dde-177f-4479-a7ea-322d6f953691\") " Feb 28 05:09:26 crc kubenswrapper[5014]: I0228 05:09:26.300389 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c443dde-177f-4479-a7ea-322d6f953691-catalog-content\") pod 
\"8c443dde-177f-4479-a7ea-322d6f953691\" (UID: \"8c443dde-177f-4479-a7ea-322d6f953691\") " Feb 28 05:09:26 crc kubenswrapper[5014]: I0228 05:09:26.301977 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c443dde-177f-4479-a7ea-322d6f953691-utilities" (OuterVolumeSpecName: "utilities") pod "8c443dde-177f-4479-a7ea-322d6f953691" (UID: "8c443dde-177f-4479-a7ea-322d6f953691"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:09:26 crc kubenswrapper[5014]: I0228 05:09:26.307191 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c443dde-177f-4479-a7ea-322d6f953691-kube-api-access-pd54s" (OuterVolumeSpecName: "kube-api-access-pd54s") pod "8c443dde-177f-4479-a7ea-322d6f953691" (UID: "8c443dde-177f-4479-a7ea-322d6f953691"). InnerVolumeSpecName "kube-api-access-pd54s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:09:26 crc kubenswrapper[5014]: I0228 05:09:26.403450 5014 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8c443dde-177f-4479-a7ea-322d6f953691-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 05:09:26 crc kubenswrapper[5014]: I0228 05:09:26.403542 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pd54s\" (UniqueName: \"kubernetes.io/projected/8c443dde-177f-4479-a7ea-322d6f953691-kube-api-access-pd54s\") on node \"crc\" DevicePath \"\"" Feb 28 05:09:26 crc kubenswrapper[5014]: I0228 05:09:26.462147 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c443dde-177f-4479-a7ea-322d6f953691-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8c443dde-177f-4479-a7ea-322d6f953691" (UID: "8c443dde-177f-4479-a7ea-322d6f953691"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:09:26 crc kubenswrapper[5014]: I0228 05:09:26.505156 5014 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8c443dde-177f-4479-a7ea-322d6f953691-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 05:09:26 crc kubenswrapper[5014]: I0228 05:09:26.717270 5014 generic.go:334] "Generic (PLEG): container finished" podID="8c443dde-177f-4479-a7ea-322d6f953691" containerID="5eec765d9a013c7ae9cf0e770dfae035c9c06dcb274e3f3367379da549e18d8a" exitCode=0 Feb 28 05:09:26 crc kubenswrapper[5014]: I0228 05:09:26.717309 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7k2jv" event={"ID":"8c443dde-177f-4479-a7ea-322d6f953691","Type":"ContainerDied","Data":"5eec765d9a013c7ae9cf0e770dfae035c9c06dcb274e3f3367379da549e18d8a"} Feb 28 05:09:26 crc kubenswrapper[5014]: I0228 05:09:26.717368 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7k2jv" event={"ID":"8c443dde-177f-4479-a7ea-322d6f953691","Type":"ContainerDied","Data":"03739e5523e3cf96914fafe99d9a531c10640af963043030f0f1525962e12f93"} Feb 28 05:09:26 crc kubenswrapper[5014]: I0228 05:09:26.717391 5014 scope.go:117] "RemoveContainer" containerID="5eec765d9a013c7ae9cf0e770dfae035c9c06dcb274e3f3367379da549e18d8a" Feb 28 05:09:26 crc kubenswrapper[5014]: I0228 05:09:26.717423 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7k2jv" Feb 28 05:09:26 crc kubenswrapper[5014]: I0228 05:09:26.747036 5014 scope.go:117] "RemoveContainer" containerID="50b36fad9f4f6621163e61ac4c5f7eca523e691c04121e1e6961adc46d6fa298" Feb 28 05:09:26 crc kubenswrapper[5014]: I0228 05:09:26.774326 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7k2jv"] Feb 28 05:09:26 crc kubenswrapper[5014]: I0228 05:09:26.791244 5014 scope.go:117] "RemoveContainer" containerID="8e543a2fbddc9cb36c2bc6de356eeff8e8ec4feab5da919b8bed10607d96b610" Feb 28 05:09:26 crc kubenswrapper[5014]: I0228 05:09:26.791333 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7k2jv"] Feb 28 05:09:26 crc kubenswrapper[5014]: I0228 05:09:26.841085 5014 scope.go:117] "RemoveContainer" containerID="5eec765d9a013c7ae9cf0e770dfae035c9c06dcb274e3f3367379da549e18d8a" Feb 28 05:09:26 crc kubenswrapper[5014]: E0228 05:09:26.841600 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5eec765d9a013c7ae9cf0e770dfae035c9c06dcb274e3f3367379da549e18d8a\": container with ID starting with 5eec765d9a013c7ae9cf0e770dfae035c9c06dcb274e3f3367379da549e18d8a not found: ID does not exist" containerID="5eec765d9a013c7ae9cf0e770dfae035c9c06dcb274e3f3367379da549e18d8a" Feb 28 05:09:26 crc kubenswrapper[5014]: I0228 05:09:26.841654 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eec765d9a013c7ae9cf0e770dfae035c9c06dcb274e3f3367379da549e18d8a"} err="failed to get container status \"5eec765d9a013c7ae9cf0e770dfae035c9c06dcb274e3f3367379da549e18d8a\": rpc error: code = NotFound desc = could not find container \"5eec765d9a013c7ae9cf0e770dfae035c9c06dcb274e3f3367379da549e18d8a\": container with ID starting with 5eec765d9a013c7ae9cf0e770dfae035c9c06dcb274e3f3367379da549e18d8a not 
found: ID does not exist" Feb 28 05:09:26 crc kubenswrapper[5014]: I0228 05:09:26.841685 5014 scope.go:117] "RemoveContainer" containerID="50b36fad9f4f6621163e61ac4c5f7eca523e691c04121e1e6961adc46d6fa298" Feb 28 05:09:26 crc kubenswrapper[5014]: E0228 05:09:26.842335 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50b36fad9f4f6621163e61ac4c5f7eca523e691c04121e1e6961adc46d6fa298\": container with ID starting with 50b36fad9f4f6621163e61ac4c5f7eca523e691c04121e1e6961adc46d6fa298 not found: ID does not exist" containerID="50b36fad9f4f6621163e61ac4c5f7eca523e691c04121e1e6961adc46d6fa298" Feb 28 05:09:26 crc kubenswrapper[5014]: I0228 05:09:26.842380 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50b36fad9f4f6621163e61ac4c5f7eca523e691c04121e1e6961adc46d6fa298"} err="failed to get container status \"50b36fad9f4f6621163e61ac4c5f7eca523e691c04121e1e6961adc46d6fa298\": rpc error: code = NotFound desc = could not find container \"50b36fad9f4f6621163e61ac4c5f7eca523e691c04121e1e6961adc46d6fa298\": container with ID starting with 50b36fad9f4f6621163e61ac4c5f7eca523e691c04121e1e6961adc46d6fa298 not found: ID does not exist" Feb 28 05:09:26 crc kubenswrapper[5014]: I0228 05:09:26.842411 5014 scope.go:117] "RemoveContainer" containerID="8e543a2fbddc9cb36c2bc6de356eeff8e8ec4feab5da919b8bed10607d96b610" Feb 28 05:09:26 crc kubenswrapper[5014]: E0228 05:09:26.842941 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8e543a2fbddc9cb36c2bc6de356eeff8e8ec4feab5da919b8bed10607d96b610\": container with ID starting with 8e543a2fbddc9cb36c2bc6de356eeff8e8ec4feab5da919b8bed10607d96b610 not found: ID does not exist" containerID="8e543a2fbddc9cb36c2bc6de356eeff8e8ec4feab5da919b8bed10607d96b610" Feb 28 05:09:26 crc kubenswrapper[5014]: I0228 05:09:26.843039 5014 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8e543a2fbddc9cb36c2bc6de356eeff8e8ec4feab5da919b8bed10607d96b610"} err="failed to get container status \"8e543a2fbddc9cb36c2bc6de356eeff8e8ec4feab5da919b8bed10607d96b610\": rpc error: code = NotFound desc = could not find container \"8e543a2fbddc9cb36c2bc6de356eeff8e8ec4feab5da919b8bed10607d96b610\": container with ID starting with 8e543a2fbddc9cb36c2bc6de356eeff8e8ec4feab5da919b8bed10607d96b610 not found: ID does not exist" Feb 28 05:09:28 crc kubenswrapper[5014]: I0228 05:09:28.186957 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c443dde-177f-4479-a7ea-322d6f953691" path="/var/lib/kubelet/pods/8c443dde-177f-4479-a7ea-322d6f953691/volumes" Feb 28 05:09:45 crc kubenswrapper[5014]: I0228 05:09:45.706416 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 05:09:45 crc kubenswrapper[5014]: I0228 05:09:45.707089 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 05:09:45 crc kubenswrapper[5014]: I0228 05:09:45.707148 5014 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cct62" Feb 28 05:09:45 crc kubenswrapper[5014]: I0228 05:09:45.708006 5014 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"0cf39994ea3bad20406b99bb9d09d57069d0bc9c30b59c1f02196a3ad836f5b7"} pod="openshift-machine-config-operator/machine-config-daemon-cct62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 05:09:45 crc kubenswrapper[5014]: I0228 05:09:45.708069 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" containerID="cri-o://0cf39994ea3bad20406b99bb9d09d57069d0bc9c30b59c1f02196a3ad836f5b7" gracePeriod=600 Feb 28 05:09:45 crc kubenswrapper[5014]: I0228 05:09:45.915739 5014 generic.go:334] "Generic (PLEG): container finished" podID="6aad0009-d904-48f8-8e30-82205907ece1" containerID="0cf39994ea3bad20406b99bb9d09d57069d0bc9c30b59c1f02196a3ad836f5b7" exitCode=0 Feb 28 05:09:45 crc kubenswrapper[5014]: I0228 05:09:45.915867 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" event={"ID":"6aad0009-d904-48f8-8e30-82205907ece1","Type":"ContainerDied","Data":"0cf39994ea3bad20406b99bb9d09d57069d0bc9c30b59c1f02196a3ad836f5b7"} Feb 28 05:09:45 crc kubenswrapper[5014]: I0228 05:09:45.916090 5014 scope.go:117] "RemoveContainer" containerID="831c080a3f614d28f54435ea4566bbc7b3d9ce5cf8da86a40f56d8dfeed0dff0" Feb 28 05:09:46 crc kubenswrapper[5014]: I0228 05:09:46.939304 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" event={"ID":"6aad0009-d904-48f8-8e30-82205907ece1","Type":"ContainerStarted","Data":"1bc4036cea0fa1b63db1f1c42bfc323f3ba1e3b4ac866f502b2a3d0a906681f1"} Feb 28 05:10:00 crc kubenswrapper[5014]: I0228 05:10:00.150380 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537590-hj4pz"] Feb 28 05:10:00 crc kubenswrapper[5014]: E0228 
05:10:00.151456 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c443dde-177f-4479-a7ea-322d6f953691" containerName="registry-server" Feb 28 05:10:00 crc kubenswrapper[5014]: I0228 05:10:00.151475 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c443dde-177f-4479-a7ea-322d6f953691" containerName="registry-server" Feb 28 05:10:00 crc kubenswrapper[5014]: E0228 05:10:00.151505 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c443dde-177f-4479-a7ea-322d6f953691" containerName="extract-utilities" Feb 28 05:10:00 crc kubenswrapper[5014]: I0228 05:10:00.151513 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c443dde-177f-4479-a7ea-322d6f953691" containerName="extract-utilities" Feb 28 05:10:00 crc kubenswrapper[5014]: E0228 05:10:00.151541 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c443dde-177f-4479-a7ea-322d6f953691" containerName="extract-content" Feb 28 05:10:00 crc kubenswrapper[5014]: I0228 05:10:00.151548 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c443dde-177f-4479-a7ea-322d6f953691" containerName="extract-content" Feb 28 05:10:00 crc kubenswrapper[5014]: I0228 05:10:00.151760 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c443dde-177f-4479-a7ea-322d6f953691" containerName="registry-server" Feb 28 05:10:00 crc kubenswrapper[5014]: I0228 05:10:00.152590 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537590-hj4pz" Feb 28 05:10:00 crc kubenswrapper[5014]: I0228 05:10:00.154764 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-r6pdk" Feb 28 05:10:00 crc kubenswrapper[5014]: I0228 05:10:00.156077 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 05:10:00 crc kubenswrapper[5014]: I0228 05:10:00.156073 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 05:10:00 crc kubenswrapper[5014]: I0228 05:10:00.163326 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537590-hj4pz"] Feb 28 05:10:00 crc kubenswrapper[5014]: I0228 05:10:00.164589 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrc69\" (UniqueName: \"kubernetes.io/projected/26593845-31ab-4f4a-8386-ad7fc2e8f4f0-kube-api-access-zrc69\") pod \"auto-csr-approver-29537590-hj4pz\" (UID: \"26593845-31ab-4f4a-8386-ad7fc2e8f4f0\") " pod="openshift-infra/auto-csr-approver-29537590-hj4pz" Feb 28 05:10:00 crc kubenswrapper[5014]: I0228 05:10:00.266288 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrc69\" (UniqueName: \"kubernetes.io/projected/26593845-31ab-4f4a-8386-ad7fc2e8f4f0-kube-api-access-zrc69\") pod \"auto-csr-approver-29537590-hj4pz\" (UID: \"26593845-31ab-4f4a-8386-ad7fc2e8f4f0\") " pod="openshift-infra/auto-csr-approver-29537590-hj4pz" Feb 28 05:10:00 crc kubenswrapper[5014]: I0228 05:10:00.291730 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrc69\" (UniqueName: \"kubernetes.io/projected/26593845-31ab-4f4a-8386-ad7fc2e8f4f0-kube-api-access-zrc69\") pod \"auto-csr-approver-29537590-hj4pz\" (UID: \"26593845-31ab-4f4a-8386-ad7fc2e8f4f0\") " 
pod="openshift-infra/auto-csr-approver-29537590-hj4pz" Feb 28 05:10:00 crc kubenswrapper[5014]: I0228 05:10:00.473188 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537590-hj4pz" Feb 28 05:10:01 crc kubenswrapper[5014]: I0228 05:10:01.031749 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537590-hj4pz"] Feb 28 05:10:01 crc kubenswrapper[5014]: I0228 05:10:01.121615 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537590-hj4pz" event={"ID":"26593845-31ab-4f4a-8386-ad7fc2e8f4f0","Type":"ContainerStarted","Data":"d8a540e6b7af1edcf1e71efec688bb09ebbb6d78fb2e0c8fa9aebeadc0ecb83b"} Feb 28 05:10:03 crc kubenswrapper[5014]: I0228 05:10:03.147223 5014 generic.go:334] "Generic (PLEG): container finished" podID="26593845-31ab-4f4a-8386-ad7fc2e8f4f0" containerID="54aa650ad7a665cd5e8d83bd159021cde4be4d79deca98cc9bbff39463682a94" exitCode=0 Feb 28 05:10:03 crc kubenswrapper[5014]: I0228 05:10:03.147328 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537590-hj4pz" event={"ID":"26593845-31ab-4f4a-8386-ad7fc2e8f4f0","Type":"ContainerDied","Data":"54aa650ad7a665cd5e8d83bd159021cde4be4d79deca98cc9bbff39463682a94"} Feb 28 05:10:04 crc kubenswrapper[5014]: I0228 05:10:04.529141 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537590-hj4pz" Feb 28 05:10:04 crc kubenswrapper[5014]: I0228 05:10:04.693419 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrc69\" (UniqueName: \"kubernetes.io/projected/26593845-31ab-4f4a-8386-ad7fc2e8f4f0-kube-api-access-zrc69\") pod \"26593845-31ab-4f4a-8386-ad7fc2e8f4f0\" (UID: \"26593845-31ab-4f4a-8386-ad7fc2e8f4f0\") " Feb 28 05:10:04 crc kubenswrapper[5014]: I0228 05:10:04.880550 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26593845-31ab-4f4a-8386-ad7fc2e8f4f0-kube-api-access-zrc69" (OuterVolumeSpecName: "kube-api-access-zrc69") pod "26593845-31ab-4f4a-8386-ad7fc2e8f4f0" (UID: "26593845-31ab-4f4a-8386-ad7fc2e8f4f0"). InnerVolumeSpecName "kube-api-access-zrc69". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:10:04 crc kubenswrapper[5014]: I0228 05:10:04.897941 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrc69\" (UniqueName: \"kubernetes.io/projected/26593845-31ab-4f4a-8386-ad7fc2e8f4f0-kube-api-access-zrc69\") on node \"crc\" DevicePath \"\"" Feb 28 05:10:05 crc kubenswrapper[5014]: I0228 05:10:05.173109 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537590-hj4pz" event={"ID":"26593845-31ab-4f4a-8386-ad7fc2e8f4f0","Type":"ContainerDied","Data":"d8a540e6b7af1edcf1e71efec688bb09ebbb6d78fb2e0c8fa9aebeadc0ecb83b"} Feb 28 05:10:05 crc kubenswrapper[5014]: I0228 05:10:05.173148 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8a540e6b7af1edcf1e71efec688bb09ebbb6d78fb2e0c8fa9aebeadc0ecb83b" Feb 28 05:10:05 crc kubenswrapper[5014]: I0228 05:10:05.173226 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537590-hj4pz" Feb 28 05:10:05 crc kubenswrapper[5014]: I0228 05:10:05.594132 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537584-z2t67"] Feb 28 05:10:05 crc kubenswrapper[5014]: I0228 05:10:05.604788 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537584-z2t67"] Feb 28 05:10:06 crc kubenswrapper[5014]: I0228 05:10:06.184540 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b0b5e49-8417-4763-8c0b-18a1d0a3a503" path="/var/lib/kubelet/pods/8b0b5e49-8417-4763-8c0b-18a1d0a3a503/volumes" Feb 28 05:10:57 crc kubenswrapper[5014]: I0228 05:10:57.591910 5014 scope.go:117] "RemoveContainer" containerID="b46cf6daf39ad8d094662e7786edbc88ce70a78a6647328f6a989e68163b54d4" Feb 28 05:11:14 crc kubenswrapper[5014]: I0228 05:11:14.735056 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6xk8j"] Feb 28 05:11:14 crc kubenswrapper[5014]: E0228 05:11:14.736000 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26593845-31ab-4f4a-8386-ad7fc2e8f4f0" containerName="oc" Feb 28 05:11:14 crc kubenswrapper[5014]: I0228 05:11:14.736016 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="26593845-31ab-4f4a-8386-ad7fc2e8f4f0" containerName="oc" Feb 28 05:11:14 crc kubenswrapper[5014]: I0228 05:11:14.736231 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="26593845-31ab-4f4a-8386-ad7fc2e8f4f0" containerName="oc" Feb 28 05:11:14 crc kubenswrapper[5014]: I0228 05:11:14.737569 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xk8j" Feb 28 05:11:14 crc kubenswrapper[5014]: I0228 05:11:14.755455 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xk8j"] Feb 28 05:11:14 crc kubenswrapper[5014]: I0228 05:11:14.915617 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1a2b7a9-31f4-48a1-9f6c-2b5716b91916-catalog-content\") pod \"redhat-marketplace-6xk8j\" (UID: \"f1a2b7a9-31f4-48a1-9f6c-2b5716b91916\") " pod="openshift-marketplace/redhat-marketplace-6xk8j" Feb 28 05:11:14 crc kubenswrapper[5014]: I0228 05:11:14.916124 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1a2b7a9-31f4-48a1-9f6c-2b5716b91916-utilities\") pod \"redhat-marketplace-6xk8j\" (UID: \"f1a2b7a9-31f4-48a1-9f6c-2b5716b91916\") " pod="openshift-marketplace/redhat-marketplace-6xk8j" Feb 28 05:11:14 crc kubenswrapper[5014]: I0228 05:11:14.916302 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwnxn\" (UniqueName: \"kubernetes.io/projected/f1a2b7a9-31f4-48a1-9f6c-2b5716b91916-kube-api-access-hwnxn\") pod \"redhat-marketplace-6xk8j\" (UID: \"f1a2b7a9-31f4-48a1-9f6c-2b5716b91916\") " pod="openshift-marketplace/redhat-marketplace-6xk8j" Feb 28 05:11:15 crc kubenswrapper[5014]: I0228 05:11:15.018226 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1a2b7a9-31f4-48a1-9f6c-2b5716b91916-utilities\") pod \"redhat-marketplace-6xk8j\" (UID: \"f1a2b7a9-31f4-48a1-9f6c-2b5716b91916\") " pod="openshift-marketplace/redhat-marketplace-6xk8j" Feb 28 05:11:15 crc kubenswrapper[5014]: I0228 05:11:15.018320 5014 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-hwnxn\" (UniqueName: \"kubernetes.io/projected/f1a2b7a9-31f4-48a1-9f6c-2b5716b91916-kube-api-access-hwnxn\") pod \"redhat-marketplace-6xk8j\" (UID: \"f1a2b7a9-31f4-48a1-9f6c-2b5716b91916\") " pod="openshift-marketplace/redhat-marketplace-6xk8j" Feb 28 05:11:15 crc kubenswrapper[5014]: I0228 05:11:15.018391 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1a2b7a9-31f4-48a1-9f6c-2b5716b91916-catalog-content\") pod \"redhat-marketplace-6xk8j\" (UID: \"f1a2b7a9-31f4-48a1-9f6c-2b5716b91916\") " pod="openshift-marketplace/redhat-marketplace-6xk8j" Feb 28 05:11:15 crc kubenswrapper[5014]: I0228 05:11:15.018787 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1a2b7a9-31f4-48a1-9f6c-2b5716b91916-utilities\") pod \"redhat-marketplace-6xk8j\" (UID: \"f1a2b7a9-31f4-48a1-9f6c-2b5716b91916\") " pod="openshift-marketplace/redhat-marketplace-6xk8j" Feb 28 05:11:15 crc kubenswrapper[5014]: I0228 05:11:15.018817 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1a2b7a9-31f4-48a1-9f6c-2b5716b91916-catalog-content\") pod \"redhat-marketplace-6xk8j\" (UID: \"f1a2b7a9-31f4-48a1-9f6c-2b5716b91916\") " pod="openshift-marketplace/redhat-marketplace-6xk8j" Feb 28 05:11:15 crc kubenswrapper[5014]: I0228 05:11:15.036721 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwnxn\" (UniqueName: \"kubernetes.io/projected/f1a2b7a9-31f4-48a1-9f6c-2b5716b91916-kube-api-access-hwnxn\") pod \"redhat-marketplace-6xk8j\" (UID: \"f1a2b7a9-31f4-48a1-9f6c-2b5716b91916\") " pod="openshift-marketplace/redhat-marketplace-6xk8j" Feb 28 05:11:15 crc kubenswrapper[5014]: I0228 05:11:15.068674 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xk8j" Feb 28 05:11:15 crc kubenswrapper[5014]: I0228 05:11:15.558851 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xk8j"] Feb 28 05:11:15 crc kubenswrapper[5014]: I0228 05:11:15.862850 5014 generic.go:334] "Generic (PLEG): container finished" podID="f1a2b7a9-31f4-48a1-9f6c-2b5716b91916" containerID="57634c1920ce7fb948b49eb7fd91108ec4076a4e0eefff5293ba737a2cf5fe15" exitCode=0 Feb 28 05:11:15 crc kubenswrapper[5014]: I0228 05:11:15.863173 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xk8j" event={"ID":"f1a2b7a9-31f4-48a1-9f6c-2b5716b91916","Type":"ContainerDied","Data":"57634c1920ce7fb948b49eb7fd91108ec4076a4e0eefff5293ba737a2cf5fe15"} Feb 28 05:11:15 crc kubenswrapper[5014]: I0228 05:11:15.863209 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xk8j" event={"ID":"f1a2b7a9-31f4-48a1-9f6c-2b5716b91916","Type":"ContainerStarted","Data":"c5ec169a9dcae677bd461d715ecc2f91189c50bee1b92b5d0e116b6cc883980e"} Feb 28 05:11:15 crc kubenswrapper[5014]: I0228 05:11:15.867006 5014 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 05:11:16 crc kubenswrapper[5014]: I0228 05:11:16.875519 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xk8j" event={"ID":"f1a2b7a9-31f4-48a1-9f6c-2b5716b91916","Type":"ContainerStarted","Data":"dd01ecbd0b50fd2c4b7e121694470706fabaa14f5b64e203faf3a08ea71708fe"} Feb 28 05:11:17 crc kubenswrapper[5014]: I0228 05:11:17.884554 5014 generic.go:334] "Generic (PLEG): container finished" podID="f1a2b7a9-31f4-48a1-9f6c-2b5716b91916" containerID="dd01ecbd0b50fd2c4b7e121694470706fabaa14f5b64e203faf3a08ea71708fe" exitCode=0 Feb 28 05:11:17 crc kubenswrapper[5014]: I0228 05:11:17.884596 5014 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-6xk8j" event={"ID":"f1a2b7a9-31f4-48a1-9f6c-2b5716b91916","Type":"ContainerDied","Data":"dd01ecbd0b50fd2c4b7e121694470706fabaa14f5b64e203faf3a08ea71708fe"} Feb 28 05:11:18 crc kubenswrapper[5014]: I0228 05:11:18.897563 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xk8j" event={"ID":"f1a2b7a9-31f4-48a1-9f6c-2b5716b91916","Type":"ContainerStarted","Data":"d7dbccf23cbfd43fa7192f12fa3623ec9a3b356865e81ce2b1916daf99fbd5c0"} Feb 28 05:11:18 crc kubenswrapper[5014]: I0228 05:11:18.919462 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6xk8j" podStartSLOduration=2.514732699 podStartE2EDuration="4.919423682s" podCreationTimestamp="2026-02-28 05:11:14 +0000 UTC" firstStartedPulling="2026-02-28 05:11:15.866711482 +0000 UTC m=+2264.536837402" lastFinishedPulling="2026-02-28 05:11:18.271402435 +0000 UTC m=+2266.941528385" observedRunningTime="2026-02-28 05:11:18.916234794 +0000 UTC m=+2267.586360704" watchObservedRunningTime="2026-02-28 05:11:18.919423682 +0000 UTC m=+2267.589549592" Feb 28 05:11:20 crc kubenswrapper[5014]: I0228 05:11:20.317643 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qksdb"] Feb 28 05:11:20 crc kubenswrapper[5014]: I0228 05:11:20.319920 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qksdb" Feb 28 05:11:20 crc kubenswrapper[5014]: I0228 05:11:20.341331 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qksdb"] Feb 28 05:11:20 crc kubenswrapper[5014]: I0228 05:11:20.461246 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqr54\" (UniqueName: \"kubernetes.io/projected/4cfa5be3-c9c1-489f-a868-1836862f7eff-kube-api-access-vqr54\") pod \"certified-operators-qksdb\" (UID: \"4cfa5be3-c9c1-489f-a868-1836862f7eff\") " pod="openshift-marketplace/certified-operators-qksdb" Feb 28 05:11:20 crc kubenswrapper[5014]: I0228 05:11:20.461405 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cfa5be3-c9c1-489f-a868-1836862f7eff-utilities\") pod \"certified-operators-qksdb\" (UID: \"4cfa5be3-c9c1-489f-a868-1836862f7eff\") " pod="openshift-marketplace/certified-operators-qksdb" Feb 28 05:11:20 crc kubenswrapper[5014]: I0228 05:11:20.461431 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cfa5be3-c9c1-489f-a868-1836862f7eff-catalog-content\") pod \"certified-operators-qksdb\" (UID: \"4cfa5be3-c9c1-489f-a868-1836862f7eff\") " pod="openshift-marketplace/certified-operators-qksdb" Feb 28 05:11:20 crc kubenswrapper[5014]: I0228 05:11:20.563868 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cfa5be3-c9c1-489f-a868-1836862f7eff-utilities\") pod \"certified-operators-qksdb\" (UID: \"4cfa5be3-c9c1-489f-a868-1836862f7eff\") " pod="openshift-marketplace/certified-operators-qksdb" Feb 28 05:11:20 crc kubenswrapper[5014]: I0228 05:11:20.563915 5014 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cfa5be3-c9c1-489f-a868-1836862f7eff-catalog-content\") pod \"certified-operators-qksdb\" (UID: \"4cfa5be3-c9c1-489f-a868-1836862f7eff\") " pod="openshift-marketplace/certified-operators-qksdb" Feb 28 05:11:20 crc kubenswrapper[5014]: I0228 05:11:20.564007 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqr54\" (UniqueName: \"kubernetes.io/projected/4cfa5be3-c9c1-489f-a868-1836862f7eff-kube-api-access-vqr54\") pod \"certified-operators-qksdb\" (UID: \"4cfa5be3-c9c1-489f-a868-1836862f7eff\") " pod="openshift-marketplace/certified-operators-qksdb" Feb 28 05:11:20 crc kubenswrapper[5014]: I0228 05:11:20.564827 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cfa5be3-c9c1-489f-a868-1836862f7eff-utilities\") pod \"certified-operators-qksdb\" (UID: \"4cfa5be3-c9c1-489f-a868-1836862f7eff\") " pod="openshift-marketplace/certified-operators-qksdb" Feb 28 05:11:20 crc kubenswrapper[5014]: I0228 05:11:20.565085 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cfa5be3-c9c1-489f-a868-1836862f7eff-catalog-content\") pod \"certified-operators-qksdb\" (UID: \"4cfa5be3-c9c1-489f-a868-1836862f7eff\") " pod="openshift-marketplace/certified-operators-qksdb" Feb 28 05:11:20 crc kubenswrapper[5014]: I0228 05:11:20.610567 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqr54\" (UniqueName: \"kubernetes.io/projected/4cfa5be3-c9c1-489f-a868-1836862f7eff-kube-api-access-vqr54\") pod \"certified-operators-qksdb\" (UID: \"4cfa5be3-c9c1-489f-a868-1836862f7eff\") " pod="openshift-marketplace/certified-operators-qksdb" Feb 28 05:11:20 crc kubenswrapper[5014]: I0228 05:11:20.640234 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qksdb" Feb 28 05:11:21 crc kubenswrapper[5014]: I0228 05:11:21.155030 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qksdb"] Feb 28 05:11:21 crc kubenswrapper[5014]: W0228 05:11:21.160200 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4cfa5be3_c9c1_489f_a868_1836862f7eff.slice/crio-dc0ab5e24970a86086f8cfa65180f96fb85cc408a2662cd0c59aefeeaefbcf40 WatchSource:0}: Error finding container dc0ab5e24970a86086f8cfa65180f96fb85cc408a2662cd0c59aefeeaefbcf40: Status 404 returned error can't find the container with id dc0ab5e24970a86086f8cfa65180f96fb85cc408a2662cd0c59aefeeaefbcf40 Feb 28 05:11:21 crc kubenswrapper[5014]: I0228 05:11:21.925853 5014 generic.go:334] "Generic (PLEG): container finished" podID="4cfa5be3-c9c1-489f-a868-1836862f7eff" containerID="00b1615d003456488db1ece6d75ff3e90c5793019cca388e97d3dde8901d367b" exitCode=0 Feb 28 05:11:21 crc kubenswrapper[5014]: I0228 05:11:21.925949 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qksdb" event={"ID":"4cfa5be3-c9c1-489f-a868-1836862f7eff","Type":"ContainerDied","Data":"00b1615d003456488db1ece6d75ff3e90c5793019cca388e97d3dde8901d367b"} Feb 28 05:11:21 crc kubenswrapper[5014]: I0228 05:11:21.926148 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qksdb" event={"ID":"4cfa5be3-c9c1-489f-a868-1836862f7eff","Type":"ContainerStarted","Data":"dc0ab5e24970a86086f8cfa65180f96fb85cc408a2662cd0c59aefeeaefbcf40"} Feb 28 05:11:22 crc kubenswrapper[5014]: I0228 05:11:22.935365 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qksdb" 
event={"ID":"4cfa5be3-c9c1-489f-a868-1836862f7eff","Type":"ContainerStarted","Data":"42725bf0dc4737c126d87f17f9270713165f635f6f521527f2963d67e4d9970b"} Feb 28 05:11:23 crc kubenswrapper[5014]: I0228 05:11:23.945884 5014 generic.go:334] "Generic (PLEG): container finished" podID="4cfa5be3-c9c1-489f-a868-1836862f7eff" containerID="42725bf0dc4737c126d87f17f9270713165f635f6f521527f2963d67e4d9970b" exitCode=0 Feb 28 05:11:23 crc kubenswrapper[5014]: I0228 05:11:23.945967 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qksdb" event={"ID":"4cfa5be3-c9c1-489f-a868-1836862f7eff","Type":"ContainerDied","Data":"42725bf0dc4737c126d87f17f9270713165f635f6f521527f2963d67e4d9970b"} Feb 28 05:11:25 crc kubenswrapper[5014]: I0228 05:11:25.069203 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6xk8j" Feb 28 05:11:25 crc kubenswrapper[5014]: I0228 05:11:25.069832 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6xk8j" Feb 28 05:11:25 crc kubenswrapper[5014]: I0228 05:11:25.147526 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6xk8j" Feb 28 05:11:25 crc kubenswrapper[5014]: I0228 05:11:25.986213 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qksdb" event={"ID":"4cfa5be3-c9c1-489f-a868-1836862f7eff","Type":"ContainerStarted","Data":"3015909e02939b71d44cedd74a2e37f221e26cae7ba4062046847d178f9aa046"} Feb 28 05:11:26 crc kubenswrapper[5014]: I0228 05:11:26.003580 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qksdb" podStartSLOduration=2.897945163 podStartE2EDuration="6.003562584s" podCreationTimestamp="2026-02-28 05:11:20 +0000 UTC" firstStartedPulling="2026-02-28 05:11:21.927833233 +0000 UTC 
m=+2270.597959143" lastFinishedPulling="2026-02-28 05:11:25.033450654 +0000 UTC m=+2273.703576564" observedRunningTime="2026-02-28 05:11:26.002329621 +0000 UTC m=+2274.672455531" watchObservedRunningTime="2026-02-28 05:11:26.003562584 +0000 UTC m=+2274.673688494" Feb 28 05:11:26 crc kubenswrapper[5014]: I0228 05:11:26.039092 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6xk8j" Feb 28 05:11:26 crc kubenswrapper[5014]: I0228 05:11:26.510146 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xk8j"] Feb 28 05:11:28 crc kubenswrapper[5014]: I0228 05:11:28.001664 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6xk8j" podUID="f1a2b7a9-31f4-48a1-9f6c-2b5716b91916" containerName="registry-server" containerID="cri-o://d7dbccf23cbfd43fa7192f12fa3623ec9a3b356865e81ce2b1916daf99fbd5c0" gracePeriod=2 Feb 28 05:11:28 crc kubenswrapper[5014]: I0228 05:11:28.511169 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xk8j" Feb 28 05:11:28 crc kubenswrapper[5014]: I0228 05:11:28.653355 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1a2b7a9-31f4-48a1-9f6c-2b5716b91916-utilities\") pod \"f1a2b7a9-31f4-48a1-9f6c-2b5716b91916\" (UID: \"f1a2b7a9-31f4-48a1-9f6c-2b5716b91916\") " Feb 28 05:11:28 crc kubenswrapper[5014]: I0228 05:11:28.654040 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwnxn\" (UniqueName: \"kubernetes.io/projected/f1a2b7a9-31f4-48a1-9f6c-2b5716b91916-kube-api-access-hwnxn\") pod \"f1a2b7a9-31f4-48a1-9f6c-2b5716b91916\" (UID: \"f1a2b7a9-31f4-48a1-9f6c-2b5716b91916\") " Feb 28 05:11:28 crc kubenswrapper[5014]: I0228 05:11:28.654158 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1a2b7a9-31f4-48a1-9f6c-2b5716b91916-catalog-content\") pod \"f1a2b7a9-31f4-48a1-9f6c-2b5716b91916\" (UID: \"f1a2b7a9-31f4-48a1-9f6c-2b5716b91916\") " Feb 28 05:11:28 crc kubenswrapper[5014]: I0228 05:11:28.655459 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1a2b7a9-31f4-48a1-9f6c-2b5716b91916-utilities" (OuterVolumeSpecName: "utilities") pod "f1a2b7a9-31f4-48a1-9f6c-2b5716b91916" (UID: "f1a2b7a9-31f4-48a1-9f6c-2b5716b91916"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:11:28 crc kubenswrapper[5014]: I0228 05:11:28.659890 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1a2b7a9-31f4-48a1-9f6c-2b5716b91916-kube-api-access-hwnxn" (OuterVolumeSpecName: "kube-api-access-hwnxn") pod "f1a2b7a9-31f4-48a1-9f6c-2b5716b91916" (UID: "f1a2b7a9-31f4-48a1-9f6c-2b5716b91916"). InnerVolumeSpecName "kube-api-access-hwnxn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:11:28 crc kubenswrapper[5014]: I0228 05:11:28.693211 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1a2b7a9-31f4-48a1-9f6c-2b5716b91916-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f1a2b7a9-31f4-48a1-9f6c-2b5716b91916" (UID: "f1a2b7a9-31f4-48a1-9f6c-2b5716b91916"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:11:28 crc kubenswrapper[5014]: I0228 05:11:28.757066 5014 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1a2b7a9-31f4-48a1-9f6c-2b5716b91916-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 05:11:28 crc kubenswrapper[5014]: I0228 05:11:28.757130 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwnxn\" (UniqueName: \"kubernetes.io/projected/f1a2b7a9-31f4-48a1-9f6c-2b5716b91916-kube-api-access-hwnxn\") on node \"crc\" DevicePath \"\"" Feb 28 05:11:28 crc kubenswrapper[5014]: I0228 05:11:28.757151 5014 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1a2b7a9-31f4-48a1-9f6c-2b5716b91916-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 05:11:29 crc kubenswrapper[5014]: I0228 05:11:29.020400 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xk8j" event={"ID":"f1a2b7a9-31f4-48a1-9f6c-2b5716b91916","Type":"ContainerDied","Data":"d7dbccf23cbfd43fa7192f12fa3623ec9a3b356865e81ce2b1916daf99fbd5c0"} Feb 28 05:11:29 crc kubenswrapper[5014]: I0228 05:11:29.020427 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xk8j" Feb 28 05:11:29 crc kubenswrapper[5014]: I0228 05:11:29.020410 5014 generic.go:334] "Generic (PLEG): container finished" podID="f1a2b7a9-31f4-48a1-9f6c-2b5716b91916" containerID="d7dbccf23cbfd43fa7192f12fa3623ec9a3b356865e81ce2b1916daf99fbd5c0" exitCode=0 Feb 28 05:11:29 crc kubenswrapper[5014]: I0228 05:11:29.020465 5014 scope.go:117] "RemoveContainer" containerID="d7dbccf23cbfd43fa7192f12fa3623ec9a3b356865e81ce2b1916daf99fbd5c0" Feb 28 05:11:29 crc kubenswrapper[5014]: I0228 05:11:29.020531 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xk8j" event={"ID":"f1a2b7a9-31f4-48a1-9f6c-2b5716b91916","Type":"ContainerDied","Data":"c5ec169a9dcae677bd461d715ecc2f91189c50bee1b92b5d0e116b6cc883980e"} Feb 28 05:11:29 crc kubenswrapper[5014]: I0228 05:11:29.071352 5014 scope.go:117] "RemoveContainer" containerID="dd01ecbd0b50fd2c4b7e121694470706fabaa14f5b64e203faf3a08ea71708fe" Feb 28 05:11:29 crc kubenswrapper[5014]: I0228 05:11:29.078855 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xk8j"] Feb 28 05:11:29 crc kubenswrapper[5014]: I0228 05:11:29.085426 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xk8j"] Feb 28 05:11:29 crc kubenswrapper[5014]: I0228 05:11:29.122276 5014 scope.go:117] "RemoveContainer" containerID="57634c1920ce7fb948b49eb7fd91108ec4076a4e0eefff5293ba737a2cf5fe15" Feb 28 05:11:29 crc kubenswrapper[5014]: I0228 05:11:29.157864 5014 scope.go:117] "RemoveContainer" containerID="d7dbccf23cbfd43fa7192f12fa3623ec9a3b356865e81ce2b1916daf99fbd5c0" Feb 28 05:11:29 crc kubenswrapper[5014]: E0228 05:11:29.158663 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7dbccf23cbfd43fa7192f12fa3623ec9a3b356865e81ce2b1916daf99fbd5c0\": container with ID starting 
with d7dbccf23cbfd43fa7192f12fa3623ec9a3b356865e81ce2b1916daf99fbd5c0 not found: ID does not exist" containerID="d7dbccf23cbfd43fa7192f12fa3623ec9a3b356865e81ce2b1916daf99fbd5c0" Feb 28 05:11:29 crc kubenswrapper[5014]: I0228 05:11:29.158740 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7dbccf23cbfd43fa7192f12fa3623ec9a3b356865e81ce2b1916daf99fbd5c0"} err="failed to get container status \"d7dbccf23cbfd43fa7192f12fa3623ec9a3b356865e81ce2b1916daf99fbd5c0\": rpc error: code = NotFound desc = could not find container \"d7dbccf23cbfd43fa7192f12fa3623ec9a3b356865e81ce2b1916daf99fbd5c0\": container with ID starting with d7dbccf23cbfd43fa7192f12fa3623ec9a3b356865e81ce2b1916daf99fbd5c0 not found: ID does not exist" Feb 28 05:11:29 crc kubenswrapper[5014]: I0228 05:11:29.158787 5014 scope.go:117] "RemoveContainer" containerID="dd01ecbd0b50fd2c4b7e121694470706fabaa14f5b64e203faf3a08ea71708fe" Feb 28 05:11:29 crc kubenswrapper[5014]: E0228 05:11:29.159227 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd01ecbd0b50fd2c4b7e121694470706fabaa14f5b64e203faf3a08ea71708fe\": container with ID starting with dd01ecbd0b50fd2c4b7e121694470706fabaa14f5b64e203faf3a08ea71708fe not found: ID does not exist" containerID="dd01ecbd0b50fd2c4b7e121694470706fabaa14f5b64e203faf3a08ea71708fe" Feb 28 05:11:29 crc kubenswrapper[5014]: I0228 05:11:29.159287 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd01ecbd0b50fd2c4b7e121694470706fabaa14f5b64e203faf3a08ea71708fe"} err="failed to get container status \"dd01ecbd0b50fd2c4b7e121694470706fabaa14f5b64e203faf3a08ea71708fe\": rpc error: code = NotFound desc = could not find container \"dd01ecbd0b50fd2c4b7e121694470706fabaa14f5b64e203faf3a08ea71708fe\": container with ID starting with dd01ecbd0b50fd2c4b7e121694470706fabaa14f5b64e203faf3a08ea71708fe not found: ID does 
not exist" Feb 28 05:11:29 crc kubenswrapper[5014]: I0228 05:11:29.159325 5014 scope.go:117] "RemoveContainer" containerID="57634c1920ce7fb948b49eb7fd91108ec4076a4e0eefff5293ba737a2cf5fe15" Feb 28 05:11:29 crc kubenswrapper[5014]: E0228 05:11:29.160470 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57634c1920ce7fb948b49eb7fd91108ec4076a4e0eefff5293ba737a2cf5fe15\": container with ID starting with 57634c1920ce7fb948b49eb7fd91108ec4076a4e0eefff5293ba737a2cf5fe15 not found: ID does not exist" containerID="57634c1920ce7fb948b49eb7fd91108ec4076a4e0eefff5293ba737a2cf5fe15" Feb 28 05:11:29 crc kubenswrapper[5014]: I0228 05:11:29.160535 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57634c1920ce7fb948b49eb7fd91108ec4076a4e0eefff5293ba737a2cf5fe15"} err="failed to get container status \"57634c1920ce7fb948b49eb7fd91108ec4076a4e0eefff5293ba737a2cf5fe15\": rpc error: code = NotFound desc = could not find container \"57634c1920ce7fb948b49eb7fd91108ec4076a4e0eefff5293ba737a2cf5fe15\": container with ID starting with 57634c1920ce7fb948b49eb7fd91108ec4076a4e0eefff5293ba737a2cf5fe15 not found: ID does not exist" Feb 28 05:11:30 crc kubenswrapper[5014]: I0228 05:11:30.191136 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1a2b7a9-31f4-48a1-9f6c-2b5716b91916" path="/var/lib/kubelet/pods/f1a2b7a9-31f4-48a1-9f6c-2b5716b91916/volumes" Feb 28 05:11:30 crc kubenswrapper[5014]: I0228 05:11:30.641851 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qksdb" Feb 28 05:11:30 crc kubenswrapper[5014]: I0228 05:11:30.641905 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qksdb" Feb 28 05:11:30 crc kubenswrapper[5014]: I0228 05:11:30.689583 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/certified-operators-qksdb" Feb 28 05:11:31 crc kubenswrapper[5014]: I0228 05:11:31.119508 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qksdb" Feb 28 05:11:31 crc kubenswrapper[5014]: I0228 05:11:31.702092 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qksdb"] Feb 28 05:11:33 crc kubenswrapper[5014]: I0228 05:11:33.059995 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qksdb" podUID="4cfa5be3-c9c1-489f-a868-1836862f7eff" containerName="registry-server" containerID="cri-o://3015909e02939b71d44cedd74a2e37f221e26cae7ba4062046847d178f9aa046" gracePeriod=2 Feb 28 05:11:33 crc kubenswrapper[5014]: I0228 05:11:33.517855 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qksdb" Feb 28 05:11:33 crc kubenswrapper[5014]: I0228 05:11:33.558480 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cfa5be3-c9c1-489f-a868-1836862f7eff-utilities\") pod \"4cfa5be3-c9c1-489f-a868-1836862f7eff\" (UID: \"4cfa5be3-c9c1-489f-a868-1836862f7eff\") " Feb 28 05:11:33 crc kubenswrapper[5014]: I0228 05:11:33.558557 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqr54\" (UniqueName: \"kubernetes.io/projected/4cfa5be3-c9c1-489f-a868-1836862f7eff-kube-api-access-vqr54\") pod \"4cfa5be3-c9c1-489f-a868-1836862f7eff\" (UID: \"4cfa5be3-c9c1-489f-a868-1836862f7eff\") " Feb 28 05:11:33 crc kubenswrapper[5014]: I0228 05:11:33.558612 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cfa5be3-c9c1-489f-a868-1836862f7eff-catalog-content\") pod 
\"4cfa5be3-c9c1-489f-a868-1836862f7eff\" (UID: \"4cfa5be3-c9c1-489f-a868-1836862f7eff\") " Feb 28 05:11:33 crc kubenswrapper[5014]: I0228 05:11:33.559366 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cfa5be3-c9c1-489f-a868-1836862f7eff-utilities" (OuterVolumeSpecName: "utilities") pod "4cfa5be3-c9c1-489f-a868-1836862f7eff" (UID: "4cfa5be3-c9c1-489f-a868-1836862f7eff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:11:33 crc kubenswrapper[5014]: I0228 05:11:33.565119 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4cfa5be3-c9c1-489f-a868-1836862f7eff-kube-api-access-vqr54" (OuterVolumeSpecName: "kube-api-access-vqr54") pod "4cfa5be3-c9c1-489f-a868-1836862f7eff" (UID: "4cfa5be3-c9c1-489f-a868-1836862f7eff"). InnerVolumeSpecName "kube-api-access-vqr54". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:11:33 crc kubenswrapper[5014]: I0228 05:11:33.615167 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4cfa5be3-c9c1-489f-a868-1836862f7eff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4cfa5be3-c9c1-489f-a868-1836862f7eff" (UID: "4cfa5be3-c9c1-489f-a868-1836862f7eff"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:11:33 crc kubenswrapper[5014]: I0228 05:11:33.660949 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vqr54\" (UniqueName: \"kubernetes.io/projected/4cfa5be3-c9c1-489f-a868-1836862f7eff-kube-api-access-vqr54\") on node \"crc\" DevicePath \"\"" Feb 28 05:11:33 crc kubenswrapper[5014]: I0228 05:11:33.660982 5014 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4cfa5be3-c9c1-489f-a868-1836862f7eff-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 05:11:33 crc kubenswrapper[5014]: I0228 05:11:33.660993 5014 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4cfa5be3-c9c1-489f-a868-1836862f7eff-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 05:11:34 crc kubenswrapper[5014]: I0228 05:11:34.078829 5014 generic.go:334] "Generic (PLEG): container finished" podID="4cfa5be3-c9c1-489f-a868-1836862f7eff" containerID="3015909e02939b71d44cedd74a2e37f221e26cae7ba4062046847d178f9aa046" exitCode=0 Feb 28 05:11:34 crc kubenswrapper[5014]: I0228 05:11:34.078878 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qksdb" event={"ID":"4cfa5be3-c9c1-489f-a868-1836862f7eff","Type":"ContainerDied","Data":"3015909e02939b71d44cedd74a2e37f221e26cae7ba4062046847d178f9aa046"} Feb 28 05:11:34 crc kubenswrapper[5014]: I0228 05:11:34.078910 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qksdb" event={"ID":"4cfa5be3-c9c1-489f-a868-1836862f7eff","Type":"ContainerDied","Data":"dc0ab5e24970a86086f8cfa65180f96fb85cc408a2662cd0c59aefeeaefbcf40"} Feb 28 05:11:34 crc kubenswrapper[5014]: I0228 05:11:34.078932 5014 scope.go:117] "RemoveContainer" containerID="3015909e02939b71d44cedd74a2e37f221e26cae7ba4062046847d178f9aa046" Feb 28 05:11:34 crc kubenswrapper[5014]: I0228 
05:11:34.079067 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qksdb" Feb 28 05:11:34 crc kubenswrapper[5014]: I0228 05:11:34.117778 5014 scope.go:117] "RemoveContainer" containerID="42725bf0dc4737c126d87f17f9270713165f635f6f521527f2963d67e4d9970b" Feb 28 05:11:34 crc kubenswrapper[5014]: I0228 05:11:34.122246 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qksdb"] Feb 28 05:11:34 crc kubenswrapper[5014]: I0228 05:11:34.134012 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qksdb"] Feb 28 05:11:34 crc kubenswrapper[5014]: I0228 05:11:34.167930 5014 scope.go:117] "RemoveContainer" containerID="00b1615d003456488db1ece6d75ff3e90c5793019cca388e97d3dde8901d367b" Feb 28 05:11:34 crc kubenswrapper[5014]: I0228 05:11:34.184770 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4cfa5be3-c9c1-489f-a868-1836862f7eff" path="/var/lib/kubelet/pods/4cfa5be3-c9c1-489f-a868-1836862f7eff/volumes" Feb 28 05:11:34 crc kubenswrapper[5014]: I0228 05:11:34.187479 5014 scope.go:117] "RemoveContainer" containerID="3015909e02939b71d44cedd74a2e37f221e26cae7ba4062046847d178f9aa046" Feb 28 05:11:34 crc kubenswrapper[5014]: E0228 05:11:34.187829 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3015909e02939b71d44cedd74a2e37f221e26cae7ba4062046847d178f9aa046\": container with ID starting with 3015909e02939b71d44cedd74a2e37f221e26cae7ba4062046847d178f9aa046 not found: ID does not exist" containerID="3015909e02939b71d44cedd74a2e37f221e26cae7ba4062046847d178f9aa046" Feb 28 05:11:34 crc kubenswrapper[5014]: I0228 05:11:34.187867 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3015909e02939b71d44cedd74a2e37f221e26cae7ba4062046847d178f9aa046"} err="failed to get 
container status \"3015909e02939b71d44cedd74a2e37f221e26cae7ba4062046847d178f9aa046\": rpc error: code = NotFound desc = could not find container \"3015909e02939b71d44cedd74a2e37f221e26cae7ba4062046847d178f9aa046\": container with ID starting with 3015909e02939b71d44cedd74a2e37f221e26cae7ba4062046847d178f9aa046 not found: ID does not exist" Feb 28 05:11:34 crc kubenswrapper[5014]: I0228 05:11:34.187888 5014 scope.go:117] "RemoveContainer" containerID="42725bf0dc4737c126d87f17f9270713165f635f6f521527f2963d67e4d9970b" Feb 28 05:11:34 crc kubenswrapper[5014]: E0228 05:11:34.188088 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42725bf0dc4737c126d87f17f9270713165f635f6f521527f2963d67e4d9970b\": container with ID starting with 42725bf0dc4737c126d87f17f9270713165f635f6f521527f2963d67e4d9970b not found: ID does not exist" containerID="42725bf0dc4737c126d87f17f9270713165f635f6f521527f2963d67e4d9970b" Feb 28 05:11:34 crc kubenswrapper[5014]: I0228 05:11:34.188114 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42725bf0dc4737c126d87f17f9270713165f635f6f521527f2963d67e4d9970b"} err="failed to get container status \"42725bf0dc4737c126d87f17f9270713165f635f6f521527f2963d67e4d9970b\": rpc error: code = NotFound desc = could not find container \"42725bf0dc4737c126d87f17f9270713165f635f6f521527f2963d67e4d9970b\": container with ID starting with 42725bf0dc4737c126d87f17f9270713165f635f6f521527f2963d67e4d9970b not found: ID does not exist" Feb 28 05:11:34 crc kubenswrapper[5014]: I0228 05:11:34.188130 5014 scope.go:117] "RemoveContainer" containerID="00b1615d003456488db1ece6d75ff3e90c5793019cca388e97d3dde8901d367b" Feb 28 05:11:34 crc kubenswrapper[5014]: E0228 05:11:34.188309 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"00b1615d003456488db1ece6d75ff3e90c5793019cca388e97d3dde8901d367b\": container with ID starting with 00b1615d003456488db1ece6d75ff3e90c5793019cca388e97d3dde8901d367b not found: ID does not exist" containerID="00b1615d003456488db1ece6d75ff3e90c5793019cca388e97d3dde8901d367b" Feb 28 05:11:34 crc kubenswrapper[5014]: I0228 05:11:34.188334 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00b1615d003456488db1ece6d75ff3e90c5793019cca388e97d3dde8901d367b"} err="failed to get container status \"00b1615d003456488db1ece6d75ff3e90c5793019cca388e97d3dde8901d367b\": rpc error: code = NotFound desc = could not find container \"00b1615d003456488db1ece6d75ff3e90c5793019cca388e97d3dde8901d367b\": container with ID starting with 00b1615d003456488db1ece6d75ff3e90c5793019cca388e97d3dde8901d367b not found: ID does not exist" Feb 28 05:12:00 crc kubenswrapper[5014]: I0228 05:12:00.153765 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537592-kttmq"] Feb 28 05:12:00 crc kubenswrapper[5014]: E0228 05:12:00.154999 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1a2b7a9-31f4-48a1-9f6c-2b5716b91916" containerName="extract-utilities" Feb 28 05:12:00 crc kubenswrapper[5014]: I0228 05:12:00.155024 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1a2b7a9-31f4-48a1-9f6c-2b5716b91916" containerName="extract-utilities" Feb 28 05:12:00 crc kubenswrapper[5014]: E0228 05:12:00.155051 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1a2b7a9-31f4-48a1-9f6c-2b5716b91916" containerName="extract-content" Feb 28 05:12:00 crc kubenswrapper[5014]: I0228 05:12:00.155063 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1a2b7a9-31f4-48a1-9f6c-2b5716b91916" containerName="extract-content" Feb 28 05:12:00 crc kubenswrapper[5014]: E0228 05:12:00.155080 5014 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f1a2b7a9-31f4-48a1-9f6c-2b5716b91916" containerName="registry-server" Feb 28 05:12:00 crc kubenswrapper[5014]: I0228 05:12:00.155090 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1a2b7a9-31f4-48a1-9f6c-2b5716b91916" containerName="registry-server" Feb 28 05:12:00 crc kubenswrapper[5014]: E0228 05:12:00.155114 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cfa5be3-c9c1-489f-a868-1836862f7eff" containerName="extract-content" Feb 28 05:12:00 crc kubenswrapper[5014]: I0228 05:12:00.155125 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cfa5be3-c9c1-489f-a868-1836862f7eff" containerName="extract-content" Feb 28 05:12:00 crc kubenswrapper[5014]: E0228 05:12:00.155157 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cfa5be3-c9c1-489f-a868-1836862f7eff" containerName="extract-utilities" Feb 28 05:12:00 crc kubenswrapper[5014]: I0228 05:12:00.155170 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cfa5be3-c9c1-489f-a868-1836862f7eff" containerName="extract-utilities" Feb 28 05:12:00 crc kubenswrapper[5014]: E0228 05:12:00.155198 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4cfa5be3-c9c1-489f-a868-1836862f7eff" containerName="registry-server" Feb 28 05:12:00 crc kubenswrapper[5014]: I0228 05:12:00.155208 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="4cfa5be3-c9c1-489f-a868-1836862f7eff" containerName="registry-server" Feb 28 05:12:00 crc kubenswrapper[5014]: I0228 05:12:00.155512 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1a2b7a9-31f4-48a1-9f6c-2b5716b91916" containerName="registry-server" Feb 28 05:12:00 crc kubenswrapper[5014]: I0228 05:12:00.155542 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="4cfa5be3-c9c1-489f-a868-1836862f7eff" containerName="registry-server" Feb 28 05:12:00 crc kubenswrapper[5014]: I0228 05:12:00.156483 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537592-kttmq" Feb 28 05:12:00 crc kubenswrapper[5014]: I0228 05:12:00.162636 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537592-kttmq"] Feb 28 05:12:00 crc kubenswrapper[5014]: I0228 05:12:00.193073 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-r6pdk" Feb 28 05:12:00 crc kubenswrapper[5014]: I0228 05:12:00.193479 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 05:12:00 crc kubenswrapper[5014]: I0228 05:12:00.193641 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 05:12:00 crc kubenswrapper[5014]: I0228 05:12:00.329841 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4fvc\" (UniqueName: \"kubernetes.io/projected/8d9ab98e-b395-4b7f-b52a-976c3a333c37-kube-api-access-k4fvc\") pod \"auto-csr-approver-29537592-kttmq\" (UID: \"8d9ab98e-b395-4b7f-b52a-976c3a333c37\") " pod="openshift-infra/auto-csr-approver-29537592-kttmq" Feb 28 05:12:00 crc kubenswrapper[5014]: I0228 05:12:00.431759 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k4fvc\" (UniqueName: \"kubernetes.io/projected/8d9ab98e-b395-4b7f-b52a-976c3a333c37-kube-api-access-k4fvc\") pod \"auto-csr-approver-29537592-kttmq\" (UID: \"8d9ab98e-b395-4b7f-b52a-976c3a333c37\") " pod="openshift-infra/auto-csr-approver-29537592-kttmq" Feb 28 05:12:00 crc kubenswrapper[5014]: I0228 05:12:00.449955 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4fvc\" (UniqueName: \"kubernetes.io/projected/8d9ab98e-b395-4b7f-b52a-976c3a333c37-kube-api-access-k4fvc\") pod \"auto-csr-approver-29537592-kttmq\" (UID: \"8d9ab98e-b395-4b7f-b52a-976c3a333c37\") " 
pod="openshift-infra/auto-csr-approver-29537592-kttmq" Feb 28 05:12:00 crc kubenswrapper[5014]: I0228 05:12:00.508163 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537592-kttmq" Feb 28 05:12:00 crc kubenswrapper[5014]: I0228 05:12:00.951278 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537592-kttmq"] Feb 28 05:12:01 crc kubenswrapper[5014]: I0228 05:12:01.339654 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537592-kttmq" event={"ID":"8d9ab98e-b395-4b7f-b52a-976c3a333c37","Type":"ContainerStarted","Data":"a32b0846dea8677ee863cb30f0cb6a9eafcb8c56970227ea797ab07818f0160f"} Feb 28 05:12:02 crc kubenswrapper[5014]: I0228 05:12:02.350087 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537592-kttmq" event={"ID":"8d9ab98e-b395-4b7f-b52a-976c3a333c37","Type":"ContainerStarted","Data":"3f6f897728b5304a38a0863fe7ef1e0c1c337da6dda268fb4e5c239c5f60a962"} Feb 28 05:12:02 crc kubenswrapper[5014]: I0228 05:12:02.371575 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537592-kttmq" podStartSLOduration=1.431457773 podStartE2EDuration="2.371553546s" podCreationTimestamp="2026-02-28 05:12:00 +0000 UTC" firstStartedPulling="2026-02-28 05:12:00.956255942 +0000 UTC m=+2309.626381852" lastFinishedPulling="2026-02-28 05:12:01.896351705 +0000 UTC m=+2310.566477625" observedRunningTime="2026-02-28 05:12:02.364156786 +0000 UTC m=+2311.034282716" watchObservedRunningTime="2026-02-28 05:12:02.371553546 +0000 UTC m=+2311.041679456" Feb 28 05:12:03 crc kubenswrapper[5014]: I0228 05:12:03.360492 5014 generic.go:334] "Generic (PLEG): container finished" podID="8d9ab98e-b395-4b7f-b52a-976c3a333c37" containerID="3f6f897728b5304a38a0863fe7ef1e0c1c337da6dda268fb4e5c239c5f60a962" exitCode=0 Feb 28 05:12:03 crc 
kubenswrapper[5014]: I0228 05:12:03.360541 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537592-kttmq" event={"ID":"8d9ab98e-b395-4b7f-b52a-976c3a333c37","Type":"ContainerDied","Data":"3f6f897728b5304a38a0863fe7ef1e0c1c337da6dda268fb4e5c239c5f60a962"} Feb 28 05:12:04 crc kubenswrapper[5014]: I0228 05:12:04.703768 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537592-kttmq" Feb 28 05:12:04 crc kubenswrapper[5014]: I0228 05:12:04.817909 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4fvc\" (UniqueName: \"kubernetes.io/projected/8d9ab98e-b395-4b7f-b52a-976c3a333c37-kube-api-access-k4fvc\") pod \"8d9ab98e-b395-4b7f-b52a-976c3a333c37\" (UID: \"8d9ab98e-b395-4b7f-b52a-976c3a333c37\") " Feb 28 05:12:04 crc kubenswrapper[5014]: I0228 05:12:04.825548 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d9ab98e-b395-4b7f-b52a-976c3a333c37-kube-api-access-k4fvc" (OuterVolumeSpecName: "kube-api-access-k4fvc") pod "8d9ab98e-b395-4b7f-b52a-976c3a333c37" (UID: "8d9ab98e-b395-4b7f-b52a-976c3a333c37"). InnerVolumeSpecName "kube-api-access-k4fvc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:12:04 crc kubenswrapper[5014]: I0228 05:12:04.920202 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k4fvc\" (UniqueName: \"kubernetes.io/projected/8d9ab98e-b395-4b7f-b52a-976c3a333c37-kube-api-access-k4fvc\") on node \"crc\" DevicePath \"\"" Feb 28 05:12:05 crc kubenswrapper[5014]: I0228 05:12:05.255200 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537586-dmm99"] Feb 28 05:12:05 crc kubenswrapper[5014]: I0228 05:12:05.266125 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537586-dmm99"] Feb 28 05:12:05 crc kubenswrapper[5014]: I0228 05:12:05.379460 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537592-kttmq" event={"ID":"8d9ab98e-b395-4b7f-b52a-976c3a333c37","Type":"ContainerDied","Data":"a32b0846dea8677ee863cb30f0cb6a9eafcb8c56970227ea797ab07818f0160f"} Feb 28 05:12:05 crc kubenswrapper[5014]: I0228 05:12:05.379517 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a32b0846dea8677ee863cb30f0cb6a9eafcb8c56970227ea797ab07818f0160f" Feb 28 05:12:05 crc kubenswrapper[5014]: I0228 05:12:05.379550 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537592-kttmq" Feb 28 05:12:06 crc kubenswrapper[5014]: I0228 05:12:06.190566 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9991fa5e-8673-41c7-8061-a43c23654a6b" path="/var/lib/kubelet/pods/9991fa5e-8673-41c7-8061-a43c23654a6b/volumes" Feb 28 05:12:15 crc kubenswrapper[5014]: I0228 05:12:15.706679 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 05:12:15 crc kubenswrapper[5014]: I0228 05:12:15.707714 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 05:12:23 crc kubenswrapper[5014]: I0228 05:12:23.549960 5014 generic.go:334] "Generic (PLEG): container finished" podID="85e8a1f1-6f8c-4af8-9273-dc37192bea6a" containerID="d1e0f391c6c0586332cddd21ebff8a8123f1f3dbe81cd0f1c9e7c064e7b8c1b1" exitCode=0 Feb 28 05:12:23 crc kubenswrapper[5014]: I0228 05:12:23.550125 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6" event={"ID":"85e8a1f1-6f8c-4af8-9273-dc37192bea6a","Type":"ContainerDied","Data":"d1e0f391c6c0586332cddd21ebff8a8123f1f3dbe81cd0f1c9e7c064e7b8c1b1"} Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.038562 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.167790 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kl6pq\" (UniqueName: \"kubernetes.io/projected/85e8a1f1-6f8c-4af8-9273-dc37192bea6a-kube-api-access-kl6pq\") pod \"85e8a1f1-6f8c-4af8-9273-dc37192bea6a\" (UID: \"85e8a1f1-6f8c-4af8-9273-dc37192bea6a\") " Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.167889 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85e8a1f1-6f8c-4af8-9273-dc37192bea6a-libvirt-combined-ca-bundle\") pod \"85e8a1f1-6f8c-4af8-9273-dc37192bea6a\" (UID: \"85e8a1f1-6f8c-4af8-9273-dc37192bea6a\") " Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.167931 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85e8a1f1-6f8c-4af8-9273-dc37192bea6a-ssh-key-openstack-edpm-ipam\") pod \"85e8a1f1-6f8c-4af8-9273-dc37192bea6a\" (UID: \"85e8a1f1-6f8c-4af8-9273-dc37192bea6a\") " Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.167993 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/85e8a1f1-6f8c-4af8-9273-dc37192bea6a-libvirt-secret-0\") pod \"85e8a1f1-6f8c-4af8-9273-dc37192bea6a\" (UID: \"85e8a1f1-6f8c-4af8-9273-dc37192bea6a\") " Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.168096 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85e8a1f1-6f8c-4af8-9273-dc37192bea6a-inventory\") pod \"85e8a1f1-6f8c-4af8-9273-dc37192bea6a\" (UID: \"85e8a1f1-6f8c-4af8-9273-dc37192bea6a\") " Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.172925 5014 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85e8a1f1-6f8c-4af8-9273-dc37192bea6a-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "85e8a1f1-6f8c-4af8-9273-dc37192bea6a" (UID: "85e8a1f1-6f8c-4af8-9273-dc37192bea6a"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.175909 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85e8a1f1-6f8c-4af8-9273-dc37192bea6a-kube-api-access-kl6pq" (OuterVolumeSpecName: "kube-api-access-kl6pq") pod "85e8a1f1-6f8c-4af8-9273-dc37192bea6a" (UID: "85e8a1f1-6f8c-4af8-9273-dc37192bea6a"). InnerVolumeSpecName "kube-api-access-kl6pq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.208885 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85e8a1f1-6f8c-4af8-9273-dc37192bea6a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "85e8a1f1-6f8c-4af8-9273-dc37192bea6a" (UID: "85e8a1f1-6f8c-4af8-9273-dc37192bea6a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.210775 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85e8a1f1-6f8c-4af8-9273-dc37192bea6a-inventory" (OuterVolumeSpecName: "inventory") pod "85e8a1f1-6f8c-4af8-9273-dc37192bea6a" (UID: "85e8a1f1-6f8c-4af8-9273-dc37192bea6a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.213094 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85e8a1f1-6f8c-4af8-9273-dc37192bea6a-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "85e8a1f1-6f8c-4af8-9273-dc37192bea6a" (UID: "85e8a1f1-6f8c-4af8-9273-dc37192bea6a"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.271326 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kl6pq\" (UniqueName: \"kubernetes.io/projected/85e8a1f1-6f8c-4af8-9273-dc37192bea6a-kube-api-access-kl6pq\") on node \"crc\" DevicePath \"\"" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.271361 5014 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85e8a1f1-6f8c-4af8-9273-dc37192bea6a-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.271372 5014 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85e8a1f1-6f8c-4af8-9273-dc37192bea6a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.271380 5014 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/85e8a1f1-6f8c-4af8-9273-dc37192bea6a-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.271390 5014 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85e8a1f1-6f8c-4af8-9273-dc37192bea6a-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.572468 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6" event={"ID":"85e8a1f1-6f8c-4af8-9273-dc37192bea6a","Type":"ContainerDied","Data":"4c3169a42a5d1e1819528461d7b53e2ed60f912d2def102586a74662cee37f96"} Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.572507 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c3169a42a5d1e1819528461d7b53e2ed60f912d2def102586a74662cee37f96" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.572558 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.679167 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s"] Feb 28 05:12:25 crc kubenswrapper[5014]: E0228 05:12:25.679538 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85e8a1f1-6f8c-4af8-9273-dc37192bea6a" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.679555 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="85e8a1f1-6f8c-4af8-9273-dc37192bea6a" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 28 05:12:25 crc kubenswrapper[5014]: E0228 05:12:25.679568 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d9ab98e-b395-4b7f-b52a-976c3a333c37" containerName="oc" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.679574 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d9ab98e-b395-4b7f-b52a-976c3a333c37" containerName="oc" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.679760 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d9ab98e-b395-4b7f-b52a-976c3a333c37" containerName="oc" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.679777 5014 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="85e8a1f1-6f8c-4af8-9273-dc37192bea6a" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.680437 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.683850 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.684411 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.684647 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.684896 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.686031 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.686362 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.686687 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6dz6b" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.695935 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s"] Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.780473 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: 
\"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62n2s\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.780628 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62n2s\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.780716 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62n2s\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.780775 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62n2s\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.780862 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62n2s\" (UID: 
\"b2cec974-8eb2-428d-8c59-97af37993f91\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.780932 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b2cec974-8eb2-428d-8c59-97af37993f91-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62n2s\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.781019 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62n2s\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.781141 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62n2s\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.781271 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62n2s\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" Feb 28 05:12:25 crc kubenswrapper[5014]: 
I0228 05:12:25.781413 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5ptq\" (UniqueName: \"kubernetes.io/projected/b2cec974-8eb2-428d-8c59-97af37993f91-kube-api-access-n5ptq\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62n2s\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.781554 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62n2s\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.882772 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62n2s\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.883197 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62n2s\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.883269 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62n2s\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.883325 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b2cec974-8eb2-428d-8c59-97af37993f91-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62n2s\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.883392 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62n2s\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.884917 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b2cec974-8eb2-428d-8c59-97af37993f91-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62n2s\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.885002 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62n2s\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.885087 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62n2s\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.885158 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5ptq\" (UniqueName: \"kubernetes.io/projected/b2cec974-8eb2-428d-8c59-97af37993f91-kube-api-access-n5ptq\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62n2s\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.885197 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62n2s\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.885315 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62n2s\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.885420 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62n2s\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.890227 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62n2s\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.890490 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62n2s\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.890501 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62n2s\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.891586 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62n2s\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" 
Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.894359 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62n2s\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.894928 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62n2s\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.895347 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62n2s\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.899999 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62n2s\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.902230 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-migration-ssh-key-0\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-62n2s\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" Feb 28 05:12:25 crc kubenswrapper[5014]: I0228 05:12:25.902948 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5ptq\" (UniqueName: \"kubernetes.io/projected/b2cec974-8eb2-428d-8c59-97af37993f91-kube-api-access-n5ptq\") pod \"nova-edpm-deployment-openstack-edpm-ipam-62n2s\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" Feb 28 05:12:26 crc kubenswrapper[5014]: I0228 05:12:26.016647 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" Feb 28 05:12:26 crc kubenswrapper[5014]: I0228 05:12:26.592165 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s"] Feb 28 05:12:27 crc kubenswrapper[5014]: I0228 05:12:27.596852 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" event={"ID":"b2cec974-8eb2-428d-8c59-97af37993f91","Type":"ContainerStarted","Data":"331a664f1fa6410ebf0477f66a7810100192cd8a372ca1406a0ea29de00b9571"} Feb 28 05:12:27 crc kubenswrapper[5014]: I0228 05:12:27.597467 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" event={"ID":"b2cec974-8eb2-428d-8c59-97af37993f91","Type":"ContainerStarted","Data":"bb527299734737f843abc235983863560f018bff121678593d3d333722ef7d04"} Feb 28 05:12:45 crc kubenswrapper[5014]: I0228 05:12:45.706997 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= 
Feb 28 05:12:45 crc kubenswrapper[5014]: I0228 05:12:45.707597 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 05:12:57 crc kubenswrapper[5014]: I0228 05:12:57.735775 5014 scope.go:117] "RemoveContainer" containerID="5676269a0cd8da3c375f1ae3e8e646559243e08af52ed02066259e681770b2e3" Feb 28 05:13:15 crc kubenswrapper[5014]: I0228 05:13:15.706384 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 05:13:15 crc kubenswrapper[5014]: I0228 05:13:15.707165 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 05:13:15 crc kubenswrapper[5014]: I0228 05:13:15.707232 5014 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cct62" Feb 28 05:13:15 crc kubenswrapper[5014]: I0228 05:13:15.708344 5014 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1bc4036cea0fa1b63db1f1c42bfc323f3ba1e3b4ac866f502b2a3d0a906681f1"} pod="openshift-machine-config-operator/machine-config-daemon-cct62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 05:13:15 crc 
kubenswrapper[5014]: I0228 05:13:15.708471 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" containerID="cri-o://1bc4036cea0fa1b63db1f1c42bfc323f3ba1e3b4ac866f502b2a3d0a906681f1" gracePeriod=600 Feb 28 05:13:16 crc kubenswrapper[5014]: I0228 05:13:16.133998 5014 generic.go:334] "Generic (PLEG): container finished" podID="6aad0009-d904-48f8-8e30-82205907ece1" containerID="1bc4036cea0fa1b63db1f1c42bfc323f3ba1e3b4ac866f502b2a3d0a906681f1" exitCode=0 Feb 28 05:13:16 crc kubenswrapper[5014]: I0228 05:13:16.134077 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" event={"ID":"6aad0009-d904-48f8-8e30-82205907ece1","Type":"ContainerDied","Data":"1bc4036cea0fa1b63db1f1c42bfc323f3ba1e3b4ac866f502b2a3d0a906681f1"} Feb 28 05:13:16 crc kubenswrapper[5014]: I0228 05:13:16.134342 5014 scope.go:117] "RemoveContainer" containerID="0cf39994ea3bad20406b99bb9d09d57069d0bc9c30b59c1f02196a3ad836f5b7" Feb 28 05:13:16 crc kubenswrapper[5014]: E0228 05:13:16.340473 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:13:17 crc kubenswrapper[5014]: I0228 05:13:17.146240 5014 scope.go:117] "RemoveContainer" containerID="1bc4036cea0fa1b63db1f1c42bfc323f3ba1e3b4ac866f502b2a3d0a906681f1" Feb 28 05:13:17 crc kubenswrapper[5014]: E0228 05:13:17.146894 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:13:17 crc kubenswrapper[5014]: I0228 05:13:17.180099 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" podStartSLOduration=51.637625363 podStartE2EDuration="52.180072846s" podCreationTimestamp="2026-02-28 05:12:25 +0000 UTC" firstStartedPulling="2026-02-28 05:12:26.590766644 +0000 UTC m=+2335.260892564" lastFinishedPulling="2026-02-28 05:12:27.133214127 +0000 UTC m=+2335.803340047" observedRunningTime="2026-02-28 05:12:27.620630842 +0000 UTC m=+2336.290756762" watchObservedRunningTime="2026-02-28 05:13:17.180072846 +0000 UTC m=+2385.850198776" Feb 28 05:13:28 crc kubenswrapper[5014]: I0228 05:13:28.171869 5014 scope.go:117] "RemoveContainer" containerID="1bc4036cea0fa1b63db1f1c42bfc323f3ba1e3b4ac866f502b2a3d0a906681f1" Feb 28 05:13:28 crc kubenswrapper[5014]: E0228 05:13:28.173029 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:13:39 crc kubenswrapper[5014]: I0228 05:13:39.171240 5014 scope.go:117] "RemoveContainer" containerID="1bc4036cea0fa1b63db1f1c42bfc323f3ba1e3b4ac866f502b2a3d0a906681f1" Feb 28 05:13:39 crc kubenswrapper[5014]: E0228 05:13:39.172153 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:13:52 crc kubenswrapper[5014]: I0228 05:13:52.179997 5014 scope.go:117] "RemoveContainer" containerID="1bc4036cea0fa1b63db1f1c42bfc323f3ba1e3b4ac866f502b2a3d0a906681f1" Feb 28 05:13:52 crc kubenswrapper[5014]: E0228 05:13:52.180676 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:14:00 crc kubenswrapper[5014]: I0228 05:14:00.144843 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537594-99x2c"] Feb 28 05:14:00 crc kubenswrapper[5014]: I0228 05:14:00.146574 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537594-99x2c" Feb 28 05:14:00 crc kubenswrapper[5014]: I0228 05:14:00.148669 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 05:14:00 crc kubenswrapper[5014]: I0228 05:14:00.149128 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-r6pdk" Feb 28 05:14:00 crc kubenswrapper[5014]: I0228 05:14:00.150047 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 05:14:00 crc kubenswrapper[5014]: I0228 05:14:00.159220 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537594-99x2c"] Feb 28 05:14:00 crc kubenswrapper[5014]: I0228 05:14:00.273988 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z72g\" (UniqueName: \"kubernetes.io/projected/9b5e4216-03d6-4fc8-93b1-9eb5cafbfc5e-kube-api-access-2z72g\") pod \"auto-csr-approver-29537594-99x2c\" (UID: \"9b5e4216-03d6-4fc8-93b1-9eb5cafbfc5e\") " pod="openshift-infra/auto-csr-approver-29537594-99x2c" Feb 28 05:14:00 crc kubenswrapper[5014]: I0228 05:14:00.376050 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z72g\" (UniqueName: \"kubernetes.io/projected/9b5e4216-03d6-4fc8-93b1-9eb5cafbfc5e-kube-api-access-2z72g\") pod \"auto-csr-approver-29537594-99x2c\" (UID: \"9b5e4216-03d6-4fc8-93b1-9eb5cafbfc5e\") " pod="openshift-infra/auto-csr-approver-29537594-99x2c" Feb 28 05:14:00 crc kubenswrapper[5014]: I0228 05:14:00.398590 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z72g\" (UniqueName: \"kubernetes.io/projected/9b5e4216-03d6-4fc8-93b1-9eb5cafbfc5e-kube-api-access-2z72g\") pod \"auto-csr-approver-29537594-99x2c\" (UID: \"9b5e4216-03d6-4fc8-93b1-9eb5cafbfc5e\") " 
pod="openshift-infra/auto-csr-approver-29537594-99x2c" Feb 28 05:14:00 crc kubenswrapper[5014]: I0228 05:14:00.469616 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537594-99x2c" Feb 28 05:14:00 crc kubenswrapper[5014]: W0228 05:14:00.915011 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b5e4216_03d6_4fc8_93b1_9eb5cafbfc5e.slice/crio-1e3272bde3cb9d663f6889449238bc67c7e8fac0c059564fe9b737b9459e1b9b WatchSource:0}: Error finding container 1e3272bde3cb9d663f6889449238bc67c7e8fac0c059564fe9b737b9459e1b9b: Status 404 returned error can't find the container with id 1e3272bde3cb9d663f6889449238bc67c7e8fac0c059564fe9b737b9459e1b9b Feb 28 05:14:00 crc kubenswrapper[5014]: I0228 05:14:00.917209 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537594-99x2c"] Feb 28 05:14:01 crc kubenswrapper[5014]: I0228 05:14:01.621864 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537594-99x2c" event={"ID":"9b5e4216-03d6-4fc8-93b1-9eb5cafbfc5e","Type":"ContainerStarted","Data":"1e3272bde3cb9d663f6889449238bc67c7e8fac0c059564fe9b737b9459e1b9b"} Feb 28 05:14:02 crc kubenswrapper[5014]: I0228 05:14:02.635138 5014 generic.go:334] "Generic (PLEG): container finished" podID="9b5e4216-03d6-4fc8-93b1-9eb5cafbfc5e" containerID="0972b63af0684bbd7174214e5fdb8ae552e79cf1735d152803f41db87e79e5a2" exitCode=0 Feb 28 05:14:02 crc kubenswrapper[5014]: I0228 05:14:02.635336 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537594-99x2c" event={"ID":"9b5e4216-03d6-4fc8-93b1-9eb5cafbfc5e","Type":"ContainerDied","Data":"0972b63af0684bbd7174214e5fdb8ae552e79cf1735d152803f41db87e79e5a2"} Feb 28 05:14:03 crc kubenswrapper[5014]: I0228 05:14:03.173490 5014 scope.go:117] "RemoveContainer" 
containerID="1bc4036cea0fa1b63db1f1c42bfc323f3ba1e3b4ac866f502b2a3d0a906681f1" Feb 28 05:14:03 crc kubenswrapper[5014]: E0228 05:14:03.174539 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:14:03 crc kubenswrapper[5014]: I0228 05:14:03.995400 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537594-99x2c" Feb 28 05:14:04 crc kubenswrapper[5014]: I0228 05:14:04.153018 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2z72g\" (UniqueName: \"kubernetes.io/projected/9b5e4216-03d6-4fc8-93b1-9eb5cafbfc5e-kube-api-access-2z72g\") pod \"9b5e4216-03d6-4fc8-93b1-9eb5cafbfc5e\" (UID: \"9b5e4216-03d6-4fc8-93b1-9eb5cafbfc5e\") " Feb 28 05:14:04 crc kubenswrapper[5014]: I0228 05:14:04.160158 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b5e4216-03d6-4fc8-93b1-9eb5cafbfc5e-kube-api-access-2z72g" (OuterVolumeSpecName: "kube-api-access-2z72g") pod "9b5e4216-03d6-4fc8-93b1-9eb5cafbfc5e" (UID: "9b5e4216-03d6-4fc8-93b1-9eb5cafbfc5e"). InnerVolumeSpecName "kube-api-access-2z72g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:14:04 crc kubenswrapper[5014]: I0228 05:14:04.255437 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2z72g\" (UniqueName: \"kubernetes.io/projected/9b5e4216-03d6-4fc8-93b1-9eb5cafbfc5e-kube-api-access-2z72g\") on node \"crc\" DevicePath \"\"" Feb 28 05:14:04 crc kubenswrapper[5014]: I0228 05:14:04.658779 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537594-99x2c" event={"ID":"9b5e4216-03d6-4fc8-93b1-9eb5cafbfc5e","Type":"ContainerDied","Data":"1e3272bde3cb9d663f6889449238bc67c7e8fac0c059564fe9b737b9459e1b9b"} Feb 28 05:14:04 crc kubenswrapper[5014]: I0228 05:14:04.658891 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e3272bde3cb9d663f6889449238bc67c7e8fac0c059564fe9b737b9459e1b9b" Feb 28 05:14:04 crc kubenswrapper[5014]: I0228 05:14:04.658978 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537594-99x2c" Feb 28 05:14:05 crc kubenswrapper[5014]: I0228 05:14:05.078256 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537588-z6fj9"] Feb 28 05:14:05 crc kubenswrapper[5014]: I0228 05:14:05.085466 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537588-z6fj9"] Feb 28 05:14:06 crc kubenswrapper[5014]: I0228 05:14:06.185005 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43276879-eb1e-4f8d-929d-30c2d43663cb" path="/var/lib/kubelet/pods/43276879-eb1e-4f8d-929d-30c2d43663cb/volumes" Feb 28 05:14:14 crc kubenswrapper[5014]: I0228 05:14:14.173064 5014 scope.go:117] "RemoveContainer" containerID="1bc4036cea0fa1b63db1f1c42bfc323f3ba1e3b4ac866f502b2a3d0a906681f1" Feb 28 05:14:14 crc kubenswrapper[5014]: E0228 05:14:14.174033 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:14:27 crc kubenswrapper[5014]: I0228 05:14:27.172710 5014 scope.go:117] "RemoveContainer" containerID="1bc4036cea0fa1b63db1f1c42bfc323f3ba1e3b4ac866f502b2a3d0a906681f1" Feb 28 05:14:27 crc kubenswrapper[5014]: E0228 05:14:27.173982 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:14:40 crc kubenswrapper[5014]: I0228 05:14:40.172771 5014 scope.go:117] "RemoveContainer" containerID="1bc4036cea0fa1b63db1f1c42bfc323f3ba1e3b4ac866f502b2a3d0a906681f1" Feb 28 05:14:40 crc kubenswrapper[5014]: E0228 05:14:40.173980 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:14:43 crc kubenswrapper[5014]: I0228 05:14:43.014562 5014 generic.go:334] "Generic (PLEG): container finished" podID="b2cec974-8eb2-428d-8c59-97af37993f91" containerID="331a664f1fa6410ebf0477f66a7810100192cd8a372ca1406a0ea29de00b9571" exitCode=0 Feb 28 05:14:43 crc kubenswrapper[5014]: 
I0228 05:14:43.014670 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" event={"ID":"b2cec974-8eb2-428d-8c59-97af37993f91","Type":"ContainerDied","Data":"331a664f1fa6410ebf0477f66a7810100192cd8a372ca1406a0ea29de00b9571"} Feb 28 05:14:44 crc kubenswrapper[5014]: I0228 05:14:44.431504 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" Feb 28 05:14:44 crc kubenswrapper[5014]: I0228 05:14:44.484113 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-migration-ssh-key-1\") pod \"b2cec974-8eb2-428d-8c59-97af37993f91\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " Feb 28 05:14:44 crc kubenswrapper[5014]: I0228 05:14:44.484158 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-cell1-compute-config-1\") pod \"b2cec974-8eb2-428d-8c59-97af37993f91\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " Feb 28 05:14:44 crc kubenswrapper[5014]: I0228 05:14:44.484194 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b2cec974-8eb2-428d-8c59-97af37993f91-nova-extra-config-0\") pod \"b2cec974-8eb2-428d-8c59-97af37993f91\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " Feb 28 05:14:44 crc kubenswrapper[5014]: I0228 05:14:44.484213 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-cell1-compute-config-2\") pod \"b2cec974-8eb2-428d-8c59-97af37993f91\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " Feb 28 
05:14:44 crc kubenswrapper[5014]: I0228 05:14:44.484261 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-migration-ssh-key-0\") pod \"b2cec974-8eb2-428d-8c59-97af37993f91\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " Feb 28 05:14:44 crc kubenswrapper[5014]: I0228 05:14:44.484294 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-ssh-key-openstack-edpm-ipam\") pod \"b2cec974-8eb2-428d-8c59-97af37993f91\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " Feb 28 05:14:44 crc kubenswrapper[5014]: I0228 05:14:44.484327 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-inventory\") pod \"b2cec974-8eb2-428d-8c59-97af37993f91\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " Feb 28 05:14:44 crc kubenswrapper[5014]: I0228 05:14:44.484366 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5ptq\" (UniqueName: \"kubernetes.io/projected/b2cec974-8eb2-428d-8c59-97af37993f91-kube-api-access-n5ptq\") pod \"b2cec974-8eb2-428d-8c59-97af37993f91\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " Feb 28 05:14:44 crc kubenswrapper[5014]: I0228 05:14:44.484393 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-combined-ca-bundle\") pod \"b2cec974-8eb2-428d-8c59-97af37993f91\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " Feb 28 05:14:44 crc kubenswrapper[5014]: I0228 05:14:44.484990 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" 
(UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-cell1-compute-config-0\") pod \"b2cec974-8eb2-428d-8c59-97af37993f91\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " Feb 28 05:14:44 crc kubenswrapper[5014]: I0228 05:14:44.485042 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-cell1-compute-config-3\") pod \"b2cec974-8eb2-428d-8c59-97af37993f91\" (UID: \"b2cec974-8eb2-428d-8c59-97af37993f91\") " Feb 28 05:14:44 crc kubenswrapper[5014]: I0228 05:14:44.490777 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "b2cec974-8eb2-428d-8c59-97af37993f91" (UID: "b2cec974-8eb2-428d-8c59-97af37993f91"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:14:44 crc kubenswrapper[5014]: I0228 05:14:44.491674 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2cec974-8eb2-428d-8c59-97af37993f91-kube-api-access-n5ptq" (OuterVolumeSpecName: "kube-api-access-n5ptq") pod "b2cec974-8eb2-428d-8c59-97af37993f91" (UID: "b2cec974-8eb2-428d-8c59-97af37993f91"). InnerVolumeSpecName "kube-api-access-n5ptq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:14:44 crc kubenswrapper[5014]: I0228 05:14:44.521240 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "b2cec974-8eb2-428d-8c59-97af37993f91" (UID: "b2cec974-8eb2-428d-8c59-97af37993f91"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:14:44 crc kubenswrapper[5014]: I0228 05:14:44.522004 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-inventory" (OuterVolumeSpecName: "inventory") pod "b2cec974-8eb2-428d-8c59-97af37993f91" (UID: "b2cec974-8eb2-428d-8c59-97af37993f91"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:14:44 crc kubenswrapper[5014]: I0228 05:14:44.527587 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "b2cec974-8eb2-428d-8c59-97af37993f91" (UID: "b2cec974-8eb2-428d-8c59-97af37993f91"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:14:44 crc kubenswrapper[5014]: I0228 05:14:44.531325 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "b2cec974-8eb2-428d-8c59-97af37993f91" (UID: "b2cec974-8eb2-428d-8c59-97af37993f91"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:14:44 crc kubenswrapper[5014]: I0228 05:14:44.533819 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "b2cec974-8eb2-428d-8c59-97af37993f91" (UID: "b2cec974-8eb2-428d-8c59-97af37993f91"). InnerVolumeSpecName "nova-cell1-compute-config-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:14:44 crc kubenswrapper[5014]: I0228 05:14:44.536108 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "b2cec974-8eb2-428d-8c59-97af37993f91" (UID: "b2cec974-8eb2-428d-8c59-97af37993f91"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:14:44 crc kubenswrapper[5014]: I0228 05:14:44.537335 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2cec974-8eb2-428d-8c59-97af37993f91-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "b2cec974-8eb2-428d-8c59-97af37993f91" (UID: "b2cec974-8eb2-428d-8c59-97af37993f91"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 05:14:44 crc kubenswrapper[5014]: I0228 05:14:44.541564 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "b2cec974-8eb2-428d-8c59-97af37993f91" (UID: "b2cec974-8eb2-428d-8c59-97af37993f91"). InnerVolumeSpecName "nova-cell1-compute-config-3". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:14:44 crc kubenswrapper[5014]: I0228 05:14:44.560511 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b2cec974-8eb2-428d-8c59-97af37993f91" (UID: "b2cec974-8eb2-428d-8c59-97af37993f91"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:14:44 crc kubenswrapper[5014]: I0228 05:14:44.586698 5014 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 28 05:14:44 crc kubenswrapper[5014]: I0228 05:14:44.586739 5014 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 28 05:14:44 crc kubenswrapper[5014]: I0228 05:14:44.586752 5014 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 28 05:14:44 crc kubenswrapper[5014]: I0228 05:14:44.586761 5014 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 28 05:14:44 crc kubenswrapper[5014]: I0228 05:14:44.586773 5014 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b2cec974-8eb2-428d-8c59-97af37993f91-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 28 05:14:44 crc kubenswrapper[5014]: I0228 05:14:44.586781 5014 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 28 05:14:44 crc kubenswrapper[5014]: I0228 05:14:44.586791 5014 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 28 05:14:44 crc kubenswrapper[5014]: I0228 05:14:44.586799 5014 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 05:14:44 crc kubenswrapper[5014]: I0228 05:14:44.586824 5014 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 05:14:44 crc kubenswrapper[5014]: I0228 05:14:44.586834 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5ptq\" (UniqueName: \"kubernetes.io/projected/b2cec974-8eb2-428d-8c59-97af37993f91-kube-api-access-n5ptq\") on node \"crc\" DevicePath \"\"" Feb 28 05:14:44 crc kubenswrapper[5014]: I0228 05:14:44.586844 5014 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2cec974-8eb2-428d-8c59-97af37993f91-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 05:14:45 crc kubenswrapper[5014]: I0228 05:14:45.042400 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" event={"ID":"b2cec974-8eb2-428d-8c59-97af37993f91","Type":"ContainerDied","Data":"bb527299734737f843abc235983863560f018bff121678593d3d333722ef7d04"} Feb 28 05:14:45 crc kubenswrapper[5014]: I0228 05:14:45.042465 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb527299734737f843abc235983863560f018bff121678593d3d333722ef7d04" Feb 28 05:14:45 crc kubenswrapper[5014]: I0228 05:14:45.042485 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-62n2s" Feb 28 05:14:45 crc kubenswrapper[5014]: I0228 05:14:45.162673 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5"] Feb 28 05:14:45 crc kubenswrapper[5014]: E0228 05:14:45.163494 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2cec974-8eb2-428d-8c59-97af37993f91" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 28 05:14:45 crc kubenswrapper[5014]: I0228 05:14:45.163521 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2cec974-8eb2-428d-8c59-97af37993f91" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 28 05:14:45 crc kubenswrapper[5014]: E0228 05:14:45.163548 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b5e4216-03d6-4fc8-93b1-9eb5cafbfc5e" containerName="oc" Feb 28 05:14:45 crc kubenswrapper[5014]: I0228 05:14:45.163557 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b5e4216-03d6-4fc8-93b1-9eb5cafbfc5e" containerName="oc" Feb 28 05:14:45 crc kubenswrapper[5014]: I0228 05:14:45.163794 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2cec974-8eb2-428d-8c59-97af37993f91" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 28 05:14:45 crc kubenswrapper[5014]: I0228 05:14:45.163839 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b5e4216-03d6-4fc8-93b1-9eb5cafbfc5e" containerName="oc" Feb 28 05:14:45 crc kubenswrapper[5014]: I0228 05:14:45.164583 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5" Feb 28 05:14:45 crc kubenswrapper[5014]: I0228 05:14:45.167606 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 28 05:14:45 crc kubenswrapper[5014]: I0228 05:14:45.167779 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 28 05:14:45 crc kubenswrapper[5014]: I0228 05:14:45.167642 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 28 05:14:45 crc kubenswrapper[5014]: I0228 05:14:45.167992 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-6dz6b" Feb 28 05:14:45 crc kubenswrapper[5014]: I0228 05:14:45.169157 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 28 05:14:45 crc kubenswrapper[5014]: I0228 05:14:45.180757 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5"] Feb 28 05:14:45 crc kubenswrapper[5014]: I0228 05:14:45.311894 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8bf54c30-88fb-46eb-8949-e2231e958201-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5\" (UID: \"8bf54c30-88fb-46eb-8949-e2231e958201\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5" Feb 28 05:14:45 crc kubenswrapper[5014]: I0228 05:14:45.311988 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bf54c30-88fb-46eb-8949-e2231e958201-telemetry-combined-ca-bundle\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5\" (UID: \"8bf54c30-88fb-46eb-8949-e2231e958201\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5" Feb 28 05:14:45 crc kubenswrapper[5014]: I0228 05:14:45.312018 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8bf54c30-88fb-46eb-8949-e2231e958201-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5\" (UID: \"8bf54c30-88fb-46eb-8949-e2231e958201\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5" Feb 28 05:14:45 crc kubenswrapper[5014]: I0228 05:14:45.312222 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2scm\" (UniqueName: \"kubernetes.io/projected/8bf54c30-88fb-46eb-8949-e2231e958201-kube-api-access-m2scm\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5\" (UID: \"8bf54c30-88fb-46eb-8949-e2231e958201\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5" Feb 28 05:14:45 crc kubenswrapper[5014]: I0228 05:14:45.312285 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8bf54c30-88fb-46eb-8949-e2231e958201-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5\" (UID: \"8bf54c30-88fb-46eb-8949-e2231e958201\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5" Feb 28 05:14:45 crc kubenswrapper[5014]: I0228 05:14:45.312338 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8bf54c30-88fb-46eb-8949-e2231e958201-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5\" (UID: \"8bf54c30-88fb-46eb-8949-e2231e958201\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5" Feb 28 05:14:45 crc kubenswrapper[5014]: I0228 05:14:45.312416 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8bf54c30-88fb-46eb-8949-e2231e958201-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5\" (UID: \"8bf54c30-88fb-46eb-8949-e2231e958201\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5" Feb 28 05:14:45 crc kubenswrapper[5014]: I0228 05:14:45.413394 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8bf54c30-88fb-46eb-8949-e2231e958201-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5\" (UID: \"8bf54c30-88fb-46eb-8949-e2231e958201\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5" Feb 28 05:14:45 crc kubenswrapper[5014]: I0228 05:14:45.413503 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8bf54c30-88fb-46eb-8949-e2231e958201-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5\" (UID: \"8bf54c30-88fb-46eb-8949-e2231e958201\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5" Feb 28 05:14:45 crc kubenswrapper[5014]: I0228 05:14:45.413572 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8bf54c30-88fb-46eb-8949-e2231e958201-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5\" (UID: \"8bf54c30-88fb-46eb-8949-e2231e958201\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5" Feb 28 05:14:45 crc kubenswrapper[5014]: I0228 
05:14:45.413606 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bf54c30-88fb-46eb-8949-e2231e958201-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5\" (UID: \"8bf54c30-88fb-46eb-8949-e2231e958201\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5" Feb 28 05:14:45 crc kubenswrapper[5014]: I0228 05:14:45.414391 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8bf54c30-88fb-46eb-8949-e2231e958201-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5\" (UID: \"8bf54c30-88fb-46eb-8949-e2231e958201\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5" Feb 28 05:14:45 crc kubenswrapper[5014]: I0228 05:14:45.414541 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2scm\" (UniqueName: \"kubernetes.io/projected/8bf54c30-88fb-46eb-8949-e2231e958201-kube-api-access-m2scm\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5\" (UID: \"8bf54c30-88fb-46eb-8949-e2231e958201\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5" Feb 28 05:14:45 crc kubenswrapper[5014]: I0228 05:14:45.414584 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8bf54c30-88fb-46eb-8949-e2231e958201-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5\" (UID: \"8bf54c30-88fb-46eb-8949-e2231e958201\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5" Feb 28 05:14:45 crc kubenswrapper[5014]: I0228 05:14:45.417183 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/8bf54c30-88fb-46eb-8949-e2231e958201-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5\" (UID: \"8bf54c30-88fb-46eb-8949-e2231e958201\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5" Feb 28 05:14:45 crc kubenswrapper[5014]: I0228 05:14:45.418419 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8bf54c30-88fb-46eb-8949-e2231e958201-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5\" (UID: \"8bf54c30-88fb-46eb-8949-e2231e958201\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5" Feb 28 05:14:45 crc kubenswrapper[5014]: I0228 05:14:45.418876 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8bf54c30-88fb-46eb-8949-e2231e958201-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5\" (UID: \"8bf54c30-88fb-46eb-8949-e2231e958201\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5" Feb 28 05:14:45 crc kubenswrapper[5014]: I0228 05:14:45.419217 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bf54c30-88fb-46eb-8949-e2231e958201-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5\" (UID: \"8bf54c30-88fb-46eb-8949-e2231e958201\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5" Feb 28 05:14:45 crc kubenswrapper[5014]: I0228 05:14:45.420012 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8bf54c30-88fb-46eb-8949-e2231e958201-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5\" (UID: \"8bf54c30-88fb-46eb-8949-e2231e958201\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5" Feb 28 05:14:45 crc kubenswrapper[5014]: I0228 05:14:45.420834 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8bf54c30-88fb-46eb-8949-e2231e958201-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5\" (UID: \"8bf54c30-88fb-46eb-8949-e2231e958201\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5" Feb 28 05:14:45 crc kubenswrapper[5014]: I0228 05:14:45.431518 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2scm\" (UniqueName: \"kubernetes.io/projected/8bf54c30-88fb-46eb-8949-e2231e958201-kube-api-access-m2scm\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5\" (UID: \"8bf54c30-88fb-46eb-8949-e2231e958201\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5" Feb 28 05:14:45 crc kubenswrapper[5014]: I0228 05:14:45.480144 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5" Feb 28 05:14:46 crc kubenswrapper[5014]: I0228 05:14:46.081760 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5"] Feb 28 05:14:46 crc kubenswrapper[5014]: W0228 05:14:46.089234 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8bf54c30_88fb_46eb_8949_e2231e958201.slice/crio-f1bc39438527c688389a45535c056443625d205668be61bdcc44cd8d8daf7291 WatchSource:0}: Error finding container f1bc39438527c688389a45535c056443625d205668be61bdcc44cd8d8daf7291: Status 404 returned error can't find the container with id f1bc39438527c688389a45535c056443625d205668be61bdcc44cd8d8daf7291 Feb 28 05:14:47 crc kubenswrapper[5014]: I0228 05:14:47.068183 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5" event={"ID":"8bf54c30-88fb-46eb-8949-e2231e958201","Type":"ContainerStarted","Data":"cedb393ba013dcf4f00af464b14900e21a32d0952f45f6236aba2b965f98eb94"} Feb 28 05:14:47 crc kubenswrapper[5014]: I0228 05:14:47.068876 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5" event={"ID":"8bf54c30-88fb-46eb-8949-e2231e958201","Type":"ContainerStarted","Data":"f1bc39438527c688389a45535c056443625d205668be61bdcc44cd8d8daf7291"} Feb 28 05:14:47 crc kubenswrapper[5014]: I0228 05:14:47.099606 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5" podStartSLOduration=1.530238862 podStartE2EDuration="2.099574127s" podCreationTimestamp="2026-02-28 05:14:45 +0000 UTC" firstStartedPulling="2026-02-28 05:14:46.092207091 +0000 UTC m=+2474.762333001" lastFinishedPulling="2026-02-28 05:14:46.661542326 +0000 UTC m=+2475.331668266" 
observedRunningTime="2026-02-28 05:14:47.088487535 +0000 UTC m=+2475.758613455" watchObservedRunningTime="2026-02-28 05:14:47.099574127 +0000 UTC m=+2475.769700037" Feb 28 05:14:54 crc kubenswrapper[5014]: I0228 05:14:54.173014 5014 scope.go:117] "RemoveContainer" containerID="1bc4036cea0fa1b63db1f1c42bfc323f3ba1e3b4ac866f502b2a3d0a906681f1" Feb 28 05:14:54 crc kubenswrapper[5014]: E0228 05:14:54.174104 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:14:57 crc kubenswrapper[5014]: I0228 05:14:57.835436 5014 scope.go:117] "RemoveContainer" containerID="f0348708d037922f7c1f4760c539f717f36d8c0c3fb3814fddf60f5a5e7f61fb" Feb 28 05:15:00 crc kubenswrapper[5014]: I0228 05:15:00.155668 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537595-vg2gz"] Feb 28 05:15:00 crc kubenswrapper[5014]: I0228 05:15:00.157769 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537595-vg2gz" Feb 28 05:15:00 crc kubenswrapper[5014]: I0228 05:15:00.160326 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 28 05:15:00 crc kubenswrapper[5014]: I0228 05:15:00.160326 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 28 05:15:00 crc kubenswrapper[5014]: I0228 05:15:00.169667 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537595-vg2gz"] Feb 28 05:15:00 crc kubenswrapper[5014]: I0228 05:15:00.300167 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/448cb37f-7613-425a-be40-a21e9800d247-secret-volume\") pod \"collect-profiles-29537595-vg2gz\" (UID: \"448cb37f-7613-425a-be40-a21e9800d247\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537595-vg2gz" Feb 28 05:15:00 crc kubenswrapper[5014]: I0228 05:15:00.300615 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/448cb37f-7613-425a-be40-a21e9800d247-config-volume\") pod \"collect-profiles-29537595-vg2gz\" (UID: \"448cb37f-7613-425a-be40-a21e9800d247\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537595-vg2gz" Feb 28 05:15:00 crc kubenswrapper[5014]: I0228 05:15:00.300885 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzkzh\" (UniqueName: \"kubernetes.io/projected/448cb37f-7613-425a-be40-a21e9800d247-kube-api-access-dzkzh\") pod \"collect-profiles-29537595-vg2gz\" (UID: \"448cb37f-7613-425a-be40-a21e9800d247\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29537595-vg2gz" Feb 28 05:15:00 crc kubenswrapper[5014]: I0228 05:15:00.402378 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzkzh\" (UniqueName: \"kubernetes.io/projected/448cb37f-7613-425a-be40-a21e9800d247-kube-api-access-dzkzh\") pod \"collect-profiles-29537595-vg2gz\" (UID: \"448cb37f-7613-425a-be40-a21e9800d247\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537595-vg2gz" Feb 28 05:15:00 crc kubenswrapper[5014]: I0228 05:15:00.402562 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/448cb37f-7613-425a-be40-a21e9800d247-secret-volume\") pod \"collect-profiles-29537595-vg2gz\" (UID: \"448cb37f-7613-425a-be40-a21e9800d247\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537595-vg2gz" Feb 28 05:15:00 crc kubenswrapper[5014]: I0228 05:15:00.402781 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/448cb37f-7613-425a-be40-a21e9800d247-config-volume\") pod \"collect-profiles-29537595-vg2gz\" (UID: \"448cb37f-7613-425a-be40-a21e9800d247\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537595-vg2gz" Feb 28 05:15:00 crc kubenswrapper[5014]: I0228 05:15:00.404417 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/448cb37f-7613-425a-be40-a21e9800d247-config-volume\") pod \"collect-profiles-29537595-vg2gz\" (UID: \"448cb37f-7613-425a-be40-a21e9800d247\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537595-vg2gz" Feb 28 05:15:00 crc kubenswrapper[5014]: I0228 05:15:00.412519 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/448cb37f-7613-425a-be40-a21e9800d247-secret-volume\") pod \"collect-profiles-29537595-vg2gz\" (UID: \"448cb37f-7613-425a-be40-a21e9800d247\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537595-vg2gz" Feb 28 05:15:00 crc kubenswrapper[5014]: I0228 05:15:00.421380 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzkzh\" (UniqueName: \"kubernetes.io/projected/448cb37f-7613-425a-be40-a21e9800d247-kube-api-access-dzkzh\") pod \"collect-profiles-29537595-vg2gz\" (UID: \"448cb37f-7613-425a-be40-a21e9800d247\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537595-vg2gz" Feb 28 05:15:00 crc kubenswrapper[5014]: I0228 05:15:00.489672 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537595-vg2gz" Feb 28 05:15:00 crc kubenswrapper[5014]: I0228 05:15:00.927427 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537595-vg2gz"] Feb 28 05:15:00 crc kubenswrapper[5014]: W0228 05:15:00.933010 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod448cb37f_7613_425a_be40_a21e9800d247.slice/crio-47e33145ff2b0be6d84ea6eb53d6e3c8ee863d3fbcf01e8f97a9cd8c627d0b5b WatchSource:0}: Error finding container 47e33145ff2b0be6d84ea6eb53d6e3c8ee863d3fbcf01e8f97a9cd8c627d0b5b: Status 404 returned error can't find the container with id 47e33145ff2b0be6d84ea6eb53d6e3c8ee863d3fbcf01e8f97a9cd8c627d0b5b Feb 28 05:15:01 crc kubenswrapper[5014]: I0228 05:15:01.236632 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537595-vg2gz" event={"ID":"448cb37f-7613-425a-be40-a21e9800d247","Type":"ContainerStarted","Data":"05fd57be717739dfb83bef5718b63fde4621037c72018daec7b0b1c39c9673e5"} Feb 28 05:15:01 crc 
kubenswrapper[5014]: I0228 05:15:01.236947 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537595-vg2gz" event={"ID":"448cb37f-7613-425a-be40-a21e9800d247","Type":"ContainerStarted","Data":"47e33145ff2b0be6d84ea6eb53d6e3c8ee863d3fbcf01e8f97a9cd8c627d0b5b"} Feb 28 05:15:01 crc kubenswrapper[5014]: I0228 05:15:01.257668 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29537595-vg2gz" podStartSLOduration=1.257646255 podStartE2EDuration="1.257646255s" podCreationTimestamp="2026-02-28 05:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 05:15:01.254795237 +0000 UTC m=+2489.924921147" watchObservedRunningTime="2026-02-28 05:15:01.257646255 +0000 UTC m=+2489.927772165" Feb 28 05:15:01 crc kubenswrapper[5014]: E0228 05:15:01.364645 5014 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod448cb37f_7613_425a_be40_a21e9800d247.slice/crio-05fd57be717739dfb83bef5718b63fde4621037c72018daec7b0b1c39c9673e5.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod448cb37f_7613_425a_be40_a21e9800d247.slice/crio-conmon-05fd57be717739dfb83bef5718b63fde4621037c72018daec7b0b1c39c9673e5.scope\": RecentStats: unable to find data in memory cache]" Feb 28 05:15:02 crc kubenswrapper[5014]: I0228 05:15:02.245197 5014 generic.go:334] "Generic (PLEG): container finished" podID="448cb37f-7613-425a-be40-a21e9800d247" containerID="05fd57be717739dfb83bef5718b63fde4621037c72018daec7b0b1c39c9673e5" exitCode=0 Feb 28 05:15:02 crc kubenswrapper[5014]: I0228 05:15:02.245296 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29537595-vg2gz" event={"ID":"448cb37f-7613-425a-be40-a21e9800d247","Type":"ContainerDied","Data":"05fd57be717739dfb83bef5718b63fde4621037c72018daec7b0b1c39c9673e5"} Feb 28 05:15:03 crc kubenswrapper[5014]: I0228 05:15:03.607261 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537595-vg2gz" Feb 28 05:15:03 crc kubenswrapper[5014]: I0228 05:15:03.779231 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/448cb37f-7613-425a-be40-a21e9800d247-secret-volume\") pod \"448cb37f-7613-425a-be40-a21e9800d247\" (UID: \"448cb37f-7613-425a-be40-a21e9800d247\") " Feb 28 05:15:03 crc kubenswrapper[5014]: I0228 05:15:03.779547 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/448cb37f-7613-425a-be40-a21e9800d247-config-volume\") pod \"448cb37f-7613-425a-be40-a21e9800d247\" (UID: \"448cb37f-7613-425a-be40-a21e9800d247\") " Feb 28 05:15:03 crc kubenswrapper[5014]: I0228 05:15:03.779614 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzkzh\" (UniqueName: \"kubernetes.io/projected/448cb37f-7613-425a-be40-a21e9800d247-kube-api-access-dzkzh\") pod \"448cb37f-7613-425a-be40-a21e9800d247\" (UID: \"448cb37f-7613-425a-be40-a21e9800d247\") " Feb 28 05:15:03 crc kubenswrapper[5014]: I0228 05:15:03.780428 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/448cb37f-7613-425a-be40-a21e9800d247-config-volume" (OuterVolumeSpecName: "config-volume") pod "448cb37f-7613-425a-be40-a21e9800d247" (UID: "448cb37f-7613-425a-be40-a21e9800d247"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 05:15:03 crc kubenswrapper[5014]: I0228 05:15:03.784707 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/448cb37f-7613-425a-be40-a21e9800d247-kube-api-access-dzkzh" (OuterVolumeSpecName: "kube-api-access-dzkzh") pod "448cb37f-7613-425a-be40-a21e9800d247" (UID: "448cb37f-7613-425a-be40-a21e9800d247"). InnerVolumeSpecName "kube-api-access-dzkzh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:15:03 crc kubenswrapper[5014]: I0228 05:15:03.784771 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/448cb37f-7613-425a-be40-a21e9800d247-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "448cb37f-7613-425a-be40-a21e9800d247" (UID: "448cb37f-7613-425a-be40-a21e9800d247"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:15:03 crc kubenswrapper[5014]: I0228 05:15:03.882137 5014 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/448cb37f-7613-425a-be40-a21e9800d247-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 28 05:15:03 crc kubenswrapper[5014]: I0228 05:15:03.882175 5014 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/448cb37f-7613-425a-be40-a21e9800d247-config-volume\") on node \"crc\" DevicePath \"\"" Feb 28 05:15:03 crc kubenswrapper[5014]: I0228 05:15:03.882184 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzkzh\" (UniqueName: \"kubernetes.io/projected/448cb37f-7613-425a-be40-a21e9800d247-kube-api-access-dzkzh\") on node \"crc\" DevicePath \"\"" Feb 28 05:15:04 crc kubenswrapper[5014]: I0228 05:15:04.271962 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537595-vg2gz" 
event={"ID":"448cb37f-7613-425a-be40-a21e9800d247","Type":"ContainerDied","Data":"47e33145ff2b0be6d84ea6eb53d6e3c8ee863d3fbcf01e8f97a9cd8c627d0b5b"} Feb 28 05:15:04 crc kubenswrapper[5014]: I0228 05:15:04.272013 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47e33145ff2b0be6d84ea6eb53d6e3c8ee863d3fbcf01e8f97a9cd8c627d0b5b" Feb 28 05:15:04 crc kubenswrapper[5014]: I0228 05:15:04.272042 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537595-vg2gz" Feb 28 05:15:04 crc kubenswrapper[5014]: I0228 05:15:04.328483 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537550-hbznb"] Feb 28 05:15:04 crc kubenswrapper[5014]: I0228 05:15:04.335698 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537550-hbznb"] Feb 28 05:15:06 crc kubenswrapper[5014]: I0228 05:15:06.184708 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91c20ddd-76d6-4e47-a24e-ec090ff039de" path="/var/lib/kubelet/pods/91c20ddd-76d6-4e47-a24e-ec090ff039de/volumes" Feb 28 05:15:07 crc kubenswrapper[5014]: I0228 05:15:07.172088 5014 scope.go:117] "RemoveContainer" containerID="1bc4036cea0fa1b63db1f1c42bfc323f3ba1e3b4ac866f502b2a3d0a906681f1" Feb 28 05:15:07 crc kubenswrapper[5014]: E0228 05:15:07.172943 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:15:22 crc kubenswrapper[5014]: I0228 05:15:22.177305 5014 scope.go:117] "RemoveContainer" 
containerID="1bc4036cea0fa1b63db1f1c42bfc323f3ba1e3b4ac866f502b2a3d0a906681f1" Feb 28 05:15:22 crc kubenswrapper[5014]: E0228 05:15:22.179678 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:15:36 crc kubenswrapper[5014]: I0228 05:15:36.173696 5014 scope.go:117] "RemoveContainer" containerID="1bc4036cea0fa1b63db1f1c42bfc323f3ba1e3b4ac866f502b2a3d0a906681f1" Feb 28 05:15:36 crc kubenswrapper[5014]: E0228 05:15:36.175135 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:15:51 crc kubenswrapper[5014]: I0228 05:15:51.171840 5014 scope.go:117] "RemoveContainer" containerID="1bc4036cea0fa1b63db1f1c42bfc323f3ba1e3b4ac866f502b2a3d0a906681f1" Feb 28 05:15:51 crc kubenswrapper[5014]: E0228 05:15:51.173217 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:15:57 crc kubenswrapper[5014]: I0228 05:15:57.925789 5014 scope.go:117] 
"RemoveContainer" containerID="f2d472137bc70dded2e55dee4388671fe35e0e9737e1a729e2dfdfd32b64eb66" Feb 28 05:16:00 crc kubenswrapper[5014]: I0228 05:16:00.149197 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537596-78pqt"] Feb 28 05:16:00 crc kubenswrapper[5014]: E0228 05:16:00.149994 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="448cb37f-7613-425a-be40-a21e9800d247" containerName="collect-profiles" Feb 28 05:16:00 crc kubenswrapper[5014]: I0228 05:16:00.150009 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="448cb37f-7613-425a-be40-a21e9800d247" containerName="collect-profiles" Feb 28 05:16:00 crc kubenswrapper[5014]: I0228 05:16:00.150269 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="448cb37f-7613-425a-be40-a21e9800d247" containerName="collect-profiles" Feb 28 05:16:00 crc kubenswrapper[5014]: I0228 05:16:00.151054 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537596-78pqt" Feb 28 05:16:00 crc kubenswrapper[5014]: I0228 05:16:00.153254 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 05:16:00 crc kubenswrapper[5014]: I0228 05:16:00.154211 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 05:16:00 crc kubenswrapper[5014]: I0228 05:16:00.154353 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-r6pdk" Feb 28 05:16:00 crc kubenswrapper[5014]: I0228 05:16:00.161836 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537596-78pqt"] Feb 28 05:16:00 crc kubenswrapper[5014]: I0228 05:16:00.245613 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfkb4\" (UniqueName: 
\"kubernetes.io/projected/ce144927-6302-48e8-a467-a4cdb3f5f931-kube-api-access-zfkb4\") pod \"auto-csr-approver-29537596-78pqt\" (UID: \"ce144927-6302-48e8-a467-a4cdb3f5f931\") " pod="openshift-infra/auto-csr-approver-29537596-78pqt" Feb 28 05:16:00 crc kubenswrapper[5014]: I0228 05:16:00.348002 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfkb4\" (UniqueName: \"kubernetes.io/projected/ce144927-6302-48e8-a467-a4cdb3f5f931-kube-api-access-zfkb4\") pod \"auto-csr-approver-29537596-78pqt\" (UID: \"ce144927-6302-48e8-a467-a4cdb3f5f931\") " pod="openshift-infra/auto-csr-approver-29537596-78pqt" Feb 28 05:16:00 crc kubenswrapper[5014]: I0228 05:16:00.383718 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfkb4\" (UniqueName: \"kubernetes.io/projected/ce144927-6302-48e8-a467-a4cdb3f5f931-kube-api-access-zfkb4\") pod \"auto-csr-approver-29537596-78pqt\" (UID: \"ce144927-6302-48e8-a467-a4cdb3f5f931\") " pod="openshift-infra/auto-csr-approver-29537596-78pqt" Feb 28 05:16:00 crc kubenswrapper[5014]: I0228 05:16:00.502638 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537596-78pqt" Feb 28 05:16:01 crc kubenswrapper[5014]: I0228 05:16:00.999984 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537596-78pqt"] Feb 28 05:16:01 crc kubenswrapper[5014]: W0228 05:16:01.003015 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce144927_6302_48e8_a467_a4cdb3f5f931.slice/crio-d532698351074e04d21234432c8e050feb231c325313019a1fb8ed6c3cc7543b WatchSource:0}: Error finding container d532698351074e04d21234432c8e050feb231c325313019a1fb8ed6c3cc7543b: Status 404 returned error can't find the container with id d532698351074e04d21234432c8e050feb231c325313019a1fb8ed6c3cc7543b Feb 28 05:16:01 crc kubenswrapper[5014]: I0228 05:16:01.787179 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537596-78pqt" event={"ID":"ce144927-6302-48e8-a467-a4cdb3f5f931","Type":"ContainerStarted","Data":"d532698351074e04d21234432c8e050feb231c325313019a1fb8ed6c3cc7543b"} Feb 28 05:16:02 crc kubenswrapper[5014]: I0228 05:16:02.797282 5014 generic.go:334] "Generic (PLEG): container finished" podID="ce144927-6302-48e8-a467-a4cdb3f5f931" containerID="7979b5a6bb8f92deada7f44e2da6a74c73d130949bb5486c480d7a716fa17d82" exitCode=0 Feb 28 05:16:02 crc kubenswrapper[5014]: I0228 05:16:02.797357 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537596-78pqt" event={"ID":"ce144927-6302-48e8-a467-a4cdb3f5f931","Type":"ContainerDied","Data":"7979b5a6bb8f92deada7f44e2da6a74c73d130949bb5486c480d7a716fa17d82"} Feb 28 05:16:04 crc kubenswrapper[5014]: I0228 05:16:04.128918 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537596-78pqt" Feb 28 05:16:04 crc kubenswrapper[5014]: I0228 05:16:04.225079 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfkb4\" (UniqueName: \"kubernetes.io/projected/ce144927-6302-48e8-a467-a4cdb3f5f931-kube-api-access-zfkb4\") pod \"ce144927-6302-48e8-a467-a4cdb3f5f931\" (UID: \"ce144927-6302-48e8-a467-a4cdb3f5f931\") " Feb 28 05:16:04 crc kubenswrapper[5014]: I0228 05:16:04.232664 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce144927-6302-48e8-a467-a4cdb3f5f931-kube-api-access-zfkb4" (OuterVolumeSpecName: "kube-api-access-zfkb4") pod "ce144927-6302-48e8-a467-a4cdb3f5f931" (UID: "ce144927-6302-48e8-a467-a4cdb3f5f931"). InnerVolumeSpecName "kube-api-access-zfkb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:16:04 crc kubenswrapper[5014]: I0228 05:16:04.328069 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfkb4\" (UniqueName: \"kubernetes.io/projected/ce144927-6302-48e8-a467-a4cdb3f5f931-kube-api-access-zfkb4\") on node \"crc\" DevicePath \"\"" Feb 28 05:16:04 crc kubenswrapper[5014]: I0228 05:16:04.847529 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537596-78pqt" event={"ID":"ce144927-6302-48e8-a467-a4cdb3f5f931","Type":"ContainerDied","Data":"d532698351074e04d21234432c8e050feb231c325313019a1fb8ed6c3cc7543b"} Feb 28 05:16:04 crc kubenswrapper[5014]: I0228 05:16:04.847594 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d532698351074e04d21234432c8e050feb231c325313019a1fb8ed6c3cc7543b" Feb 28 05:16:04 crc kubenswrapper[5014]: I0228 05:16:04.847602 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537596-78pqt" Feb 28 05:16:05 crc kubenswrapper[5014]: I0228 05:16:05.202173 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537590-hj4pz"] Feb 28 05:16:05 crc kubenswrapper[5014]: I0228 05:16:05.210311 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537590-hj4pz"] Feb 28 05:16:06 crc kubenswrapper[5014]: I0228 05:16:06.172839 5014 scope.go:117] "RemoveContainer" containerID="1bc4036cea0fa1b63db1f1c42bfc323f3ba1e3b4ac866f502b2a3d0a906681f1" Feb 28 05:16:06 crc kubenswrapper[5014]: E0228 05:16:06.173483 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:16:06 crc kubenswrapper[5014]: I0228 05:16:06.187213 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26593845-31ab-4f4a-8386-ad7fc2e8f4f0" path="/var/lib/kubelet/pods/26593845-31ab-4f4a-8386-ad7fc2e8f4f0/volumes" Feb 28 05:16:08 crc kubenswrapper[5014]: I0228 05:16:08.077606 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m478h"] Feb 28 05:16:08 crc kubenswrapper[5014]: E0228 05:16:08.078342 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce144927-6302-48e8-a467-a4cdb3f5f931" containerName="oc" Feb 28 05:16:08 crc kubenswrapper[5014]: I0228 05:16:08.078355 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce144927-6302-48e8-a467-a4cdb3f5f931" containerName="oc" Feb 28 05:16:08 crc kubenswrapper[5014]: I0228 05:16:08.078545 5014 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="ce144927-6302-48e8-a467-a4cdb3f5f931" containerName="oc" Feb 28 05:16:08 crc kubenswrapper[5014]: I0228 05:16:08.080122 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m478h" Feb 28 05:16:08 crc kubenswrapper[5014]: I0228 05:16:08.106898 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m478h"] Feb 28 05:16:08 crc kubenswrapper[5014]: I0228 05:16:08.207890 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0d99ef1-f336-49ce-8d38-51b1fc3f1f62-catalog-content\") pod \"redhat-operators-m478h\" (UID: \"b0d99ef1-f336-49ce-8d38-51b1fc3f1f62\") " pod="openshift-marketplace/redhat-operators-m478h" Feb 28 05:16:08 crc kubenswrapper[5014]: I0228 05:16:08.207949 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmchv\" (UniqueName: \"kubernetes.io/projected/b0d99ef1-f336-49ce-8d38-51b1fc3f1f62-kube-api-access-nmchv\") pod \"redhat-operators-m478h\" (UID: \"b0d99ef1-f336-49ce-8d38-51b1fc3f1f62\") " pod="openshift-marketplace/redhat-operators-m478h" Feb 28 05:16:08 crc kubenswrapper[5014]: I0228 05:16:08.207985 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0d99ef1-f336-49ce-8d38-51b1fc3f1f62-utilities\") pod \"redhat-operators-m478h\" (UID: \"b0d99ef1-f336-49ce-8d38-51b1fc3f1f62\") " pod="openshift-marketplace/redhat-operators-m478h" Feb 28 05:16:08 crc kubenswrapper[5014]: I0228 05:16:08.309501 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0d99ef1-f336-49ce-8d38-51b1fc3f1f62-catalog-content\") pod \"redhat-operators-m478h\" (UID: 
\"b0d99ef1-f336-49ce-8d38-51b1fc3f1f62\") " pod="openshift-marketplace/redhat-operators-m478h" Feb 28 05:16:08 crc kubenswrapper[5014]: I0228 05:16:08.309551 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmchv\" (UniqueName: \"kubernetes.io/projected/b0d99ef1-f336-49ce-8d38-51b1fc3f1f62-kube-api-access-nmchv\") pod \"redhat-operators-m478h\" (UID: \"b0d99ef1-f336-49ce-8d38-51b1fc3f1f62\") " pod="openshift-marketplace/redhat-operators-m478h" Feb 28 05:16:08 crc kubenswrapper[5014]: I0228 05:16:08.309639 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0d99ef1-f336-49ce-8d38-51b1fc3f1f62-utilities\") pod \"redhat-operators-m478h\" (UID: \"b0d99ef1-f336-49ce-8d38-51b1fc3f1f62\") " pod="openshift-marketplace/redhat-operators-m478h" Feb 28 05:16:08 crc kubenswrapper[5014]: I0228 05:16:08.310065 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0d99ef1-f336-49ce-8d38-51b1fc3f1f62-catalog-content\") pod \"redhat-operators-m478h\" (UID: \"b0d99ef1-f336-49ce-8d38-51b1fc3f1f62\") " pod="openshift-marketplace/redhat-operators-m478h" Feb 28 05:16:08 crc kubenswrapper[5014]: I0228 05:16:08.310182 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0d99ef1-f336-49ce-8d38-51b1fc3f1f62-utilities\") pod \"redhat-operators-m478h\" (UID: \"b0d99ef1-f336-49ce-8d38-51b1fc3f1f62\") " pod="openshift-marketplace/redhat-operators-m478h" Feb 28 05:16:08 crc kubenswrapper[5014]: I0228 05:16:08.340937 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmchv\" (UniqueName: \"kubernetes.io/projected/b0d99ef1-f336-49ce-8d38-51b1fc3f1f62-kube-api-access-nmchv\") pod \"redhat-operators-m478h\" (UID: \"b0d99ef1-f336-49ce-8d38-51b1fc3f1f62\") " 
pod="openshift-marketplace/redhat-operators-m478h" Feb 28 05:16:08 crc kubenswrapper[5014]: I0228 05:16:08.405426 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m478h" Feb 28 05:16:08 crc kubenswrapper[5014]: I0228 05:16:08.864100 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m478h"] Feb 28 05:16:08 crc kubenswrapper[5014]: I0228 05:16:08.891782 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m478h" event={"ID":"b0d99ef1-f336-49ce-8d38-51b1fc3f1f62","Type":"ContainerStarted","Data":"d0cd6a85482b48e11c9481fef64d8bdcf8431c24ac109bcf14b2494f6c92c1c2"} Feb 28 05:16:09 crc kubenswrapper[5014]: I0228 05:16:09.904047 5014 generic.go:334] "Generic (PLEG): container finished" podID="b0d99ef1-f336-49ce-8d38-51b1fc3f1f62" containerID="4ffe7337c025888bf804b643cde4549b51660a77a179a773ee34a24b6234563f" exitCode=0 Feb 28 05:16:09 crc kubenswrapper[5014]: I0228 05:16:09.904098 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m478h" event={"ID":"b0d99ef1-f336-49ce-8d38-51b1fc3f1f62","Type":"ContainerDied","Data":"4ffe7337c025888bf804b643cde4549b51660a77a179a773ee34a24b6234563f"} Feb 28 05:16:11 crc kubenswrapper[5014]: I0228 05:16:11.926107 5014 generic.go:334] "Generic (PLEG): container finished" podID="b0d99ef1-f336-49ce-8d38-51b1fc3f1f62" containerID="7e653bdfe1c07de28e333a0b46b13a22fd6bfc1808e4f70949ed8f0f000a98cb" exitCode=0 Feb 28 05:16:11 crc kubenswrapper[5014]: I0228 05:16:11.926184 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m478h" event={"ID":"b0d99ef1-f336-49ce-8d38-51b1fc3f1f62","Type":"ContainerDied","Data":"7e653bdfe1c07de28e333a0b46b13a22fd6bfc1808e4f70949ed8f0f000a98cb"} Feb 28 05:16:12 crc kubenswrapper[5014]: I0228 05:16:12.941008 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-m478h" event={"ID":"b0d99ef1-f336-49ce-8d38-51b1fc3f1f62","Type":"ContainerStarted","Data":"7092616e183b8bfaace2908ef9a6d71abff8fc13cb2d8bd296b0a6ed6f48a55c"} Feb 28 05:16:12 crc kubenswrapper[5014]: I0228 05:16:12.976825 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m478h" podStartSLOduration=2.494735326 podStartE2EDuration="4.97666792s" podCreationTimestamp="2026-02-28 05:16:08 +0000 UTC" firstStartedPulling="2026-02-28 05:16:09.906384782 +0000 UTC m=+2558.576510702" lastFinishedPulling="2026-02-28 05:16:12.388317386 +0000 UTC m=+2561.058443296" observedRunningTime="2026-02-28 05:16:12.967414098 +0000 UTC m=+2561.637540038" watchObservedRunningTime="2026-02-28 05:16:12.97666792 +0000 UTC m=+2561.646793830" Feb 28 05:16:18 crc kubenswrapper[5014]: I0228 05:16:18.405694 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m478h" Feb 28 05:16:18 crc kubenswrapper[5014]: I0228 05:16:18.406111 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m478h" Feb 28 05:16:19 crc kubenswrapper[5014]: I0228 05:16:19.465027 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m478h" podUID="b0d99ef1-f336-49ce-8d38-51b1fc3f1f62" containerName="registry-server" probeResult="failure" output=< Feb 28 05:16:19 crc kubenswrapper[5014]: timeout: failed to connect service ":50051" within 1s Feb 28 05:16:19 crc kubenswrapper[5014]: > Feb 28 05:16:21 crc kubenswrapper[5014]: I0228 05:16:21.172998 5014 scope.go:117] "RemoveContainer" containerID="1bc4036cea0fa1b63db1f1c42bfc323f3ba1e3b4ac866f502b2a3d0a906681f1" Feb 28 05:16:21 crc kubenswrapper[5014]: E0228 05:16:21.174137 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:16:28 crc kubenswrapper[5014]: I0228 05:16:28.463609 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m478h" Feb 28 05:16:28 crc kubenswrapper[5014]: I0228 05:16:28.531105 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m478h" Feb 28 05:16:28 crc kubenswrapper[5014]: I0228 05:16:28.706875 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m478h"] Feb 28 05:16:30 crc kubenswrapper[5014]: I0228 05:16:30.145212 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m478h" podUID="b0d99ef1-f336-49ce-8d38-51b1fc3f1f62" containerName="registry-server" containerID="cri-o://7092616e183b8bfaace2908ef9a6d71abff8fc13cb2d8bd296b0a6ed6f48a55c" gracePeriod=2 Feb 28 05:16:30 crc kubenswrapper[5014]: I0228 05:16:30.615564 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m478h" Feb 28 05:16:30 crc kubenswrapper[5014]: I0228 05:16:30.685672 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0d99ef1-f336-49ce-8d38-51b1fc3f1f62-catalog-content\") pod \"b0d99ef1-f336-49ce-8d38-51b1fc3f1f62\" (UID: \"b0d99ef1-f336-49ce-8d38-51b1fc3f1f62\") " Feb 28 05:16:30 crc kubenswrapper[5014]: I0228 05:16:30.685721 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0d99ef1-f336-49ce-8d38-51b1fc3f1f62-utilities\") pod \"b0d99ef1-f336-49ce-8d38-51b1fc3f1f62\" (UID: \"b0d99ef1-f336-49ce-8d38-51b1fc3f1f62\") " Feb 28 05:16:30 crc kubenswrapper[5014]: I0228 05:16:30.685760 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmchv\" (UniqueName: \"kubernetes.io/projected/b0d99ef1-f336-49ce-8d38-51b1fc3f1f62-kube-api-access-nmchv\") pod \"b0d99ef1-f336-49ce-8d38-51b1fc3f1f62\" (UID: \"b0d99ef1-f336-49ce-8d38-51b1fc3f1f62\") " Feb 28 05:16:30 crc kubenswrapper[5014]: I0228 05:16:30.688830 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0d99ef1-f336-49ce-8d38-51b1fc3f1f62-utilities" (OuterVolumeSpecName: "utilities") pod "b0d99ef1-f336-49ce-8d38-51b1fc3f1f62" (UID: "b0d99ef1-f336-49ce-8d38-51b1fc3f1f62"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:16:30 crc kubenswrapper[5014]: I0228 05:16:30.691470 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0d99ef1-f336-49ce-8d38-51b1fc3f1f62-kube-api-access-nmchv" (OuterVolumeSpecName: "kube-api-access-nmchv") pod "b0d99ef1-f336-49ce-8d38-51b1fc3f1f62" (UID: "b0d99ef1-f336-49ce-8d38-51b1fc3f1f62"). InnerVolumeSpecName "kube-api-access-nmchv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:16:30 crc kubenswrapper[5014]: I0228 05:16:30.787873 5014 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b0d99ef1-f336-49ce-8d38-51b1fc3f1f62-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 05:16:30 crc kubenswrapper[5014]: I0228 05:16:30.787902 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nmchv\" (UniqueName: \"kubernetes.io/projected/b0d99ef1-f336-49ce-8d38-51b1fc3f1f62-kube-api-access-nmchv\") on node \"crc\" DevicePath \"\"" Feb 28 05:16:30 crc kubenswrapper[5014]: I0228 05:16:30.812421 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0d99ef1-f336-49ce-8d38-51b1fc3f1f62-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b0d99ef1-f336-49ce-8d38-51b1fc3f1f62" (UID: "b0d99ef1-f336-49ce-8d38-51b1fc3f1f62"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:16:30 crc kubenswrapper[5014]: I0228 05:16:30.889780 5014 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b0d99ef1-f336-49ce-8d38-51b1fc3f1f62-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 05:16:31 crc kubenswrapper[5014]: I0228 05:16:31.156830 5014 generic.go:334] "Generic (PLEG): container finished" podID="b0d99ef1-f336-49ce-8d38-51b1fc3f1f62" containerID="7092616e183b8bfaace2908ef9a6d71abff8fc13cb2d8bd296b0a6ed6f48a55c" exitCode=0 Feb 28 05:16:31 crc kubenswrapper[5014]: I0228 05:16:31.156884 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m478h" event={"ID":"b0d99ef1-f336-49ce-8d38-51b1fc3f1f62","Type":"ContainerDied","Data":"7092616e183b8bfaace2908ef9a6d71abff8fc13cb2d8bd296b0a6ed6f48a55c"} Feb 28 05:16:31 crc kubenswrapper[5014]: I0228 05:16:31.156916 5014 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m478h" Feb 28 05:16:31 crc kubenswrapper[5014]: I0228 05:16:31.156945 5014 scope.go:117] "RemoveContainer" containerID="7092616e183b8bfaace2908ef9a6d71abff8fc13cb2d8bd296b0a6ed6f48a55c" Feb 28 05:16:31 crc kubenswrapper[5014]: I0228 05:16:31.156929 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m478h" event={"ID":"b0d99ef1-f336-49ce-8d38-51b1fc3f1f62","Type":"ContainerDied","Data":"d0cd6a85482b48e11c9481fef64d8bdcf8431c24ac109bcf14b2494f6c92c1c2"} Feb 28 05:16:31 crc kubenswrapper[5014]: I0228 05:16:31.184635 5014 scope.go:117] "RemoveContainer" containerID="7e653bdfe1c07de28e333a0b46b13a22fd6bfc1808e4f70949ed8f0f000a98cb" Feb 28 05:16:31 crc kubenswrapper[5014]: I0228 05:16:31.192584 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m478h"] Feb 28 05:16:31 crc kubenswrapper[5014]: I0228 05:16:31.206529 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m478h"] Feb 28 05:16:31 crc kubenswrapper[5014]: I0228 05:16:31.227663 5014 scope.go:117] "RemoveContainer" containerID="4ffe7337c025888bf804b643cde4549b51660a77a179a773ee34a24b6234563f" Feb 28 05:16:31 crc kubenswrapper[5014]: I0228 05:16:31.253968 5014 scope.go:117] "RemoveContainer" containerID="7092616e183b8bfaace2908ef9a6d71abff8fc13cb2d8bd296b0a6ed6f48a55c" Feb 28 05:16:31 crc kubenswrapper[5014]: E0228 05:16:31.254491 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7092616e183b8bfaace2908ef9a6d71abff8fc13cb2d8bd296b0a6ed6f48a55c\": container with ID starting with 7092616e183b8bfaace2908ef9a6d71abff8fc13cb2d8bd296b0a6ed6f48a55c not found: ID does not exist" containerID="7092616e183b8bfaace2908ef9a6d71abff8fc13cb2d8bd296b0a6ed6f48a55c" Feb 28 05:16:31 crc kubenswrapper[5014]: I0228 05:16:31.254556 5014 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7092616e183b8bfaace2908ef9a6d71abff8fc13cb2d8bd296b0a6ed6f48a55c"} err="failed to get container status \"7092616e183b8bfaace2908ef9a6d71abff8fc13cb2d8bd296b0a6ed6f48a55c\": rpc error: code = NotFound desc = could not find container \"7092616e183b8bfaace2908ef9a6d71abff8fc13cb2d8bd296b0a6ed6f48a55c\": container with ID starting with 7092616e183b8bfaace2908ef9a6d71abff8fc13cb2d8bd296b0a6ed6f48a55c not found: ID does not exist" Feb 28 05:16:31 crc kubenswrapper[5014]: I0228 05:16:31.254582 5014 scope.go:117] "RemoveContainer" containerID="7e653bdfe1c07de28e333a0b46b13a22fd6bfc1808e4f70949ed8f0f000a98cb" Feb 28 05:16:31 crc kubenswrapper[5014]: E0228 05:16:31.254913 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e653bdfe1c07de28e333a0b46b13a22fd6bfc1808e4f70949ed8f0f000a98cb\": container with ID starting with 7e653bdfe1c07de28e333a0b46b13a22fd6bfc1808e4f70949ed8f0f000a98cb not found: ID does not exist" containerID="7e653bdfe1c07de28e333a0b46b13a22fd6bfc1808e4f70949ed8f0f000a98cb" Feb 28 05:16:31 crc kubenswrapper[5014]: I0228 05:16:31.254945 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e653bdfe1c07de28e333a0b46b13a22fd6bfc1808e4f70949ed8f0f000a98cb"} err="failed to get container status \"7e653bdfe1c07de28e333a0b46b13a22fd6bfc1808e4f70949ed8f0f000a98cb\": rpc error: code = NotFound desc = could not find container \"7e653bdfe1c07de28e333a0b46b13a22fd6bfc1808e4f70949ed8f0f000a98cb\": container with ID starting with 7e653bdfe1c07de28e333a0b46b13a22fd6bfc1808e4f70949ed8f0f000a98cb not found: ID does not exist" Feb 28 05:16:31 crc kubenswrapper[5014]: I0228 05:16:31.254968 5014 scope.go:117] "RemoveContainer" containerID="4ffe7337c025888bf804b643cde4549b51660a77a179a773ee34a24b6234563f" Feb 28 05:16:31 crc kubenswrapper[5014]: E0228 
05:16:31.255209 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ffe7337c025888bf804b643cde4549b51660a77a179a773ee34a24b6234563f\": container with ID starting with 4ffe7337c025888bf804b643cde4549b51660a77a179a773ee34a24b6234563f not found: ID does not exist" containerID="4ffe7337c025888bf804b643cde4549b51660a77a179a773ee34a24b6234563f" Feb 28 05:16:31 crc kubenswrapper[5014]: I0228 05:16:31.255236 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ffe7337c025888bf804b643cde4549b51660a77a179a773ee34a24b6234563f"} err="failed to get container status \"4ffe7337c025888bf804b643cde4549b51660a77a179a773ee34a24b6234563f\": rpc error: code = NotFound desc = could not find container \"4ffe7337c025888bf804b643cde4549b51660a77a179a773ee34a24b6234563f\": container with ID starting with 4ffe7337c025888bf804b643cde4549b51660a77a179a773ee34a24b6234563f not found: ID does not exist" Feb 28 05:16:32 crc kubenswrapper[5014]: I0228 05:16:32.184419 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0d99ef1-f336-49ce-8d38-51b1fc3f1f62" path="/var/lib/kubelet/pods/b0d99ef1-f336-49ce-8d38-51b1fc3f1f62/volumes" Feb 28 05:16:34 crc kubenswrapper[5014]: I0228 05:16:34.171970 5014 scope.go:117] "RemoveContainer" containerID="1bc4036cea0fa1b63db1f1c42bfc323f3ba1e3b4ac866f502b2a3d0a906681f1" Feb 28 05:16:34 crc kubenswrapper[5014]: E0228 05:16:34.173414 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:16:45 crc kubenswrapper[5014]: I0228 05:16:45.173209 
5014 scope.go:117] "RemoveContainer" containerID="1bc4036cea0fa1b63db1f1c42bfc323f3ba1e3b4ac866f502b2a3d0a906681f1" Feb 28 05:16:45 crc kubenswrapper[5014]: E0228 05:16:45.174117 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:16:57 crc kubenswrapper[5014]: I0228 05:16:57.982096 5014 scope.go:117] "RemoveContainer" containerID="54aa650ad7a665cd5e8d83bd159021cde4be4d79deca98cc9bbff39463682a94" Feb 28 05:16:58 crc kubenswrapper[5014]: I0228 05:16:58.172010 5014 scope.go:117] "RemoveContainer" containerID="1bc4036cea0fa1b63db1f1c42bfc323f3ba1e3b4ac866f502b2a3d0a906681f1" Feb 28 05:16:58 crc kubenswrapper[5014]: E0228 05:16:58.172380 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:17:00 crc kubenswrapper[5014]: I0228 05:17:00.468286 5014 generic.go:334] "Generic (PLEG): container finished" podID="8bf54c30-88fb-46eb-8949-e2231e958201" containerID="cedb393ba013dcf4f00af464b14900e21a32d0952f45f6236aba2b965f98eb94" exitCode=0 Feb 28 05:17:00 crc kubenswrapper[5014]: I0228 05:17:00.468367 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5" 
event={"ID":"8bf54c30-88fb-46eb-8949-e2231e958201","Type":"ContainerDied","Data":"cedb393ba013dcf4f00af464b14900e21a32d0952f45f6236aba2b965f98eb94"} Feb 28 05:17:01 crc kubenswrapper[5014]: I0228 05:17:01.914726 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5" Feb 28 05:17:02 crc kubenswrapper[5014]: I0228 05:17:02.063918 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2scm\" (UniqueName: \"kubernetes.io/projected/8bf54c30-88fb-46eb-8949-e2231e958201-kube-api-access-m2scm\") pod \"8bf54c30-88fb-46eb-8949-e2231e958201\" (UID: \"8bf54c30-88fb-46eb-8949-e2231e958201\") " Feb 28 05:17:02 crc kubenswrapper[5014]: I0228 05:17:02.064009 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8bf54c30-88fb-46eb-8949-e2231e958201-ceilometer-compute-config-data-1\") pod \"8bf54c30-88fb-46eb-8949-e2231e958201\" (UID: \"8bf54c30-88fb-46eb-8949-e2231e958201\") " Feb 28 05:17:02 crc kubenswrapper[5014]: I0228 05:17:02.064034 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8bf54c30-88fb-46eb-8949-e2231e958201-ceilometer-compute-config-data-0\") pod \"8bf54c30-88fb-46eb-8949-e2231e958201\" (UID: \"8bf54c30-88fb-46eb-8949-e2231e958201\") " Feb 28 05:17:02 crc kubenswrapper[5014]: I0228 05:17:02.064062 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8bf54c30-88fb-46eb-8949-e2231e958201-inventory\") pod \"8bf54c30-88fb-46eb-8949-e2231e958201\" (UID: \"8bf54c30-88fb-46eb-8949-e2231e958201\") " Feb 28 05:17:02 crc kubenswrapper[5014]: I0228 05:17:02.064155 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8bf54c30-88fb-46eb-8949-e2231e958201-ssh-key-openstack-edpm-ipam\") pod \"8bf54c30-88fb-46eb-8949-e2231e958201\" (UID: \"8bf54c30-88fb-46eb-8949-e2231e958201\") " Feb 28 05:17:02 crc kubenswrapper[5014]: I0228 05:17:02.064187 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8bf54c30-88fb-46eb-8949-e2231e958201-ceilometer-compute-config-data-2\") pod \"8bf54c30-88fb-46eb-8949-e2231e958201\" (UID: \"8bf54c30-88fb-46eb-8949-e2231e958201\") " Feb 28 05:17:02 crc kubenswrapper[5014]: I0228 05:17:02.064206 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bf54c30-88fb-46eb-8949-e2231e958201-telemetry-combined-ca-bundle\") pod \"8bf54c30-88fb-46eb-8949-e2231e958201\" (UID: \"8bf54c30-88fb-46eb-8949-e2231e958201\") " Feb 28 05:17:02 crc kubenswrapper[5014]: I0228 05:17:02.072003 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bf54c30-88fb-46eb-8949-e2231e958201-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "8bf54c30-88fb-46eb-8949-e2231e958201" (UID: "8bf54c30-88fb-46eb-8949-e2231e958201"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:17:02 crc kubenswrapper[5014]: I0228 05:17:02.085355 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bf54c30-88fb-46eb-8949-e2231e958201-kube-api-access-m2scm" (OuterVolumeSpecName: "kube-api-access-m2scm") pod "8bf54c30-88fb-46eb-8949-e2231e958201" (UID: "8bf54c30-88fb-46eb-8949-e2231e958201"). InnerVolumeSpecName "kube-api-access-m2scm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:17:02 crc kubenswrapper[5014]: I0228 05:17:02.097284 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bf54c30-88fb-46eb-8949-e2231e958201-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "8bf54c30-88fb-46eb-8949-e2231e958201" (UID: "8bf54c30-88fb-46eb-8949-e2231e958201"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:17:02 crc kubenswrapper[5014]: I0228 05:17:02.098390 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bf54c30-88fb-46eb-8949-e2231e958201-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8bf54c30-88fb-46eb-8949-e2231e958201" (UID: "8bf54c30-88fb-46eb-8949-e2231e958201"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:17:02 crc kubenswrapper[5014]: I0228 05:17:02.099587 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bf54c30-88fb-46eb-8949-e2231e958201-inventory" (OuterVolumeSpecName: "inventory") pod "8bf54c30-88fb-46eb-8949-e2231e958201" (UID: "8bf54c30-88fb-46eb-8949-e2231e958201"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:17:02 crc kubenswrapper[5014]: I0228 05:17:02.104061 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bf54c30-88fb-46eb-8949-e2231e958201-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "8bf54c30-88fb-46eb-8949-e2231e958201" (UID: "8bf54c30-88fb-46eb-8949-e2231e958201"). InnerVolumeSpecName "ceilometer-compute-config-data-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:17:02 crc kubenswrapper[5014]: I0228 05:17:02.111228 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bf54c30-88fb-46eb-8949-e2231e958201-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "8bf54c30-88fb-46eb-8949-e2231e958201" (UID: "8bf54c30-88fb-46eb-8949-e2231e958201"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:17:02 crc kubenswrapper[5014]: I0228 05:17:02.167197 5014 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/8bf54c30-88fb-46eb-8949-e2231e958201-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 28 05:17:02 crc kubenswrapper[5014]: I0228 05:17:02.167228 5014 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/8bf54c30-88fb-46eb-8949-e2231e958201-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 28 05:17:02 crc kubenswrapper[5014]: I0228 05:17:02.167243 5014 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8bf54c30-88fb-46eb-8949-e2231e958201-inventory\") on node \"crc\" DevicePath \"\"" Feb 28 05:17:02 crc kubenswrapper[5014]: I0228 05:17:02.167263 5014 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8bf54c30-88fb-46eb-8949-e2231e958201-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 28 05:17:02 crc kubenswrapper[5014]: I0228 05:17:02.167280 5014 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/8bf54c30-88fb-46eb-8949-e2231e958201-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 28 
05:17:02 crc kubenswrapper[5014]: I0228 05:17:02.167291 5014 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bf54c30-88fb-46eb-8949-e2231e958201-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 28 05:17:02 crc kubenswrapper[5014]: I0228 05:17:02.167307 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2scm\" (UniqueName: \"kubernetes.io/projected/8bf54c30-88fb-46eb-8949-e2231e958201-kube-api-access-m2scm\") on node \"crc\" DevicePath \"\"" Feb 28 05:17:02 crc kubenswrapper[5014]: I0228 05:17:02.489303 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5" event={"ID":"8bf54c30-88fb-46eb-8949-e2231e958201","Type":"ContainerDied","Data":"f1bc39438527c688389a45535c056443625d205668be61bdcc44cd8d8daf7291"} Feb 28 05:17:02 crc kubenswrapper[5014]: I0228 05:17:02.489820 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1bc39438527c688389a45535c056443625d205668be61bdcc44cd8d8daf7291" Feb 28 05:17:02 crc kubenswrapper[5014]: I0228 05:17:02.489356 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5" Feb 28 05:17:05 crc kubenswrapper[5014]: E0228 05:17:05.140879 5014 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.150:46260->38.102.83.150:33019: write tcp 38.102.83.150:46260->38.102.83.150:33019: write: broken pipe Feb 28 05:17:11 crc kubenswrapper[5014]: I0228 05:17:11.172312 5014 scope.go:117] "RemoveContainer" containerID="1bc4036cea0fa1b63db1f1c42bfc323f3ba1e3b4ac866f502b2a3d0a906681f1" Feb 28 05:17:11 crc kubenswrapper[5014]: E0228 05:17:11.173111 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:17:23 crc kubenswrapper[5014]: I0228 05:17:23.172301 5014 scope.go:117] "RemoveContainer" containerID="1bc4036cea0fa1b63db1f1c42bfc323f3ba1e3b4ac866f502b2a3d0a906681f1" Feb 28 05:17:23 crc kubenswrapper[5014]: E0228 05:17:23.173050 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:17:36 crc kubenswrapper[5014]: I0228 05:17:36.172376 5014 scope.go:117] "RemoveContainer" containerID="1bc4036cea0fa1b63db1f1c42bfc323f3ba1e3b4ac866f502b2a3d0a906681f1" Feb 28 05:17:36 crc kubenswrapper[5014]: E0228 05:17:36.173331 5014 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:17:51 crc kubenswrapper[5014]: I0228 05:17:51.172413 5014 scope.go:117] "RemoveContainer" containerID="1bc4036cea0fa1b63db1f1c42bfc323f3ba1e3b4ac866f502b2a3d0a906681f1" Feb 28 05:17:51 crc kubenswrapper[5014]: E0228 05:17:51.173177 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.075510 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 28 05:18:00 crc kubenswrapper[5014]: E0228 05:18:00.076368 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d99ef1-f336-49ce-8d38-51b1fc3f1f62" containerName="extract-utilities" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.076381 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d99ef1-f336-49ce-8d38-51b1fc3f1f62" containerName="extract-utilities" Feb 28 05:18:00 crc kubenswrapper[5014]: E0228 05:18:00.076391 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d99ef1-f336-49ce-8d38-51b1fc3f1f62" containerName="extract-content" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.076397 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d99ef1-f336-49ce-8d38-51b1fc3f1f62" containerName="extract-content" 
Feb 28 05:18:00 crc kubenswrapper[5014]: E0228 05:18:00.076410 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0d99ef1-f336-49ce-8d38-51b1fc3f1f62" containerName="registry-server" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.076418 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0d99ef1-f336-49ce-8d38-51b1fc3f1f62" containerName="registry-server" Feb 28 05:18:00 crc kubenswrapper[5014]: E0228 05:18:00.076432 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bf54c30-88fb-46eb-8949-e2231e958201" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.076439 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bf54c30-88fb-46eb-8949-e2231e958201" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.076615 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bf54c30-88fb-46eb-8949-e2231e958201" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.076635 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0d99ef1-f336-49ce-8d38-51b1fc3f1f62" containerName="registry-server" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.077239 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.079745 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-l2zhb" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.080199 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.080254 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.080802 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.093475 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.143272 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537598-l8ms6"] Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.145443 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537598-l8ms6" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.147732 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.147918 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.148043 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-r6pdk" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.153518 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537598-l8ms6"] Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.201761 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2db9b9b7-c55d-4b8b-b51b-cd081afed742-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\") " pod="openstack/tempest-tests-tempest" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.201955 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2db9b9b7-c55d-4b8b-b51b-cd081afed742-config-data\") pod \"tempest-tests-tempest\" (UID: \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\") " pod="openstack/tempest-tests-tempest" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.201991 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2db9b9b7-c55d-4b8b-b51b-cd081afed742-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\") " pod="openstack/tempest-tests-tempest" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.202017 5014 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\") " pod="openstack/tempest-tests-tempest" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.202057 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwr7s\" (UniqueName: \"kubernetes.io/projected/2db9b9b7-c55d-4b8b-b51b-cd081afed742-kube-api-access-kwr7s\") pod \"tempest-tests-tempest\" (UID: \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\") " pod="openstack/tempest-tests-tempest" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.202081 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2db9b9b7-c55d-4b8b-b51b-cd081afed742-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\") " pod="openstack/tempest-tests-tempest" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.202103 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2db9b9b7-c55d-4b8b-b51b-cd081afed742-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\") " pod="openstack/tempest-tests-tempest" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.202156 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2db9b9b7-c55d-4b8b-b51b-cd081afed742-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\") " pod="openstack/tempest-tests-tempest" Feb 28 05:18:00 crc 
kubenswrapper[5014]: I0228 05:18:00.202194 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2db9b9b7-c55d-4b8b-b51b-cd081afed742-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\") " pod="openstack/tempest-tests-tempest" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.303722 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwr7s\" (UniqueName: \"kubernetes.io/projected/2db9b9b7-c55d-4b8b-b51b-cd081afed742-kube-api-access-kwr7s\") pod \"tempest-tests-tempest\" (UID: \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\") " pod="openstack/tempest-tests-tempest" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.303777 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2db9b9b7-c55d-4b8b-b51b-cd081afed742-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\") " pod="openstack/tempest-tests-tempest" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.303825 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2db9b9b7-c55d-4b8b-b51b-cd081afed742-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\") " pod="openstack/tempest-tests-tempest" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.303866 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45swv\" (UniqueName: \"kubernetes.io/projected/cdbd7594-dc62-4b56-a5f4-dc5c9d92cdd2-kube-api-access-45swv\") pod \"auto-csr-approver-29537598-l8ms6\" (UID: \"cdbd7594-dc62-4b56-a5f4-dc5c9d92cdd2\") " 
pod="openshift-infra/auto-csr-approver-29537598-l8ms6" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.303966 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2db9b9b7-c55d-4b8b-b51b-cd081afed742-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\") " pod="openstack/tempest-tests-tempest" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.304010 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2db9b9b7-c55d-4b8b-b51b-cd081afed742-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\") " pod="openstack/tempest-tests-tempest" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.304034 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2db9b9b7-c55d-4b8b-b51b-cd081afed742-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\") " pod="openstack/tempest-tests-tempest" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.305045 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2db9b9b7-c55d-4b8b-b51b-cd081afed742-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\") " pod="openstack/tempest-tests-tempest" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.305299 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2db9b9b7-c55d-4b8b-b51b-cd081afed742-config-data\") pod \"tempest-tests-tempest\" (UID: \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\") " pod="openstack/tempest-tests-tempest" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 
05:18:00.305443 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2db9b9b7-c55d-4b8b-b51b-cd081afed742-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\") " pod="openstack/tempest-tests-tempest" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.305915 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2db9b9b7-c55d-4b8b-b51b-cd081afed742-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\") " pod="openstack/tempest-tests-tempest" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.306437 5014 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/tempest-tests-tempest" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.306773 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2db9b9b7-c55d-4b8b-b51b-cd081afed742-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\") " pod="openstack/tempest-tests-tempest" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.305626 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\") " pod="openstack/tempest-tests-tempest" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.307047 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/2db9b9b7-c55d-4b8b-b51b-cd081afed742-config-data\") pod \"tempest-tests-tempest\" (UID: \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\") " pod="openstack/tempest-tests-tempest" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.312735 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2db9b9b7-c55d-4b8b-b51b-cd081afed742-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\") " pod="openstack/tempest-tests-tempest" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.313048 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2db9b9b7-c55d-4b8b-b51b-cd081afed742-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\") " pod="openstack/tempest-tests-tempest" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.314073 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2db9b9b7-c55d-4b8b-b51b-cd081afed742-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\") " pod="openstack/tempest-tests-tempest" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.325881 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwr7s\" (UniqueName: \"kubernetes.io/projected/2db9b9b7-c55d-4b8b-b51b-cd081afed742-kube-api-access-kwr7s\") pod \"tempest-tests-tempest\" (UID: \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\") " pod="openstack/tempest-tests-tempest" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.340643 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\") " 
pod="openstack/tempest-tests-tempest" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.408970 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45swv\" (UniqueName: \"kubernetes.io/projected/cdbd7594-dc62-4b56-a5f4-dc5c9d92cdd2-kube-api-access-45swv\") pod \"auto-csr-approver-29537598-l8ms6\" (UID: \"cdbd7594-dc62-4b56-a5f4-dc5c9d92cdd2\") " pod="openshift-infra/auto-csr-approver-29537598-l8ms6" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.421563 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.432887 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45swv\" (UniqueName: \"kubernetes.io/projected/cdbd7594-dc62-4b56-a5f4-dc5c9d92cdd2-kube-api-access-45swv\") pod \"auto-csr-approver-29537598-l8ms6\" (UID: \"cdbd7594-dc62-4b56-a5f4-dc5c9d92cdd2\") " pod="openshift-infra/auto-csr-approver-29537598-l8ms6" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.470092 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537598-l8ms6" Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.888584 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 28 05:18:00 crc kubenswrapper[5014]: W0228 05:18:00.896365 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2db9b9b7_c55d_4b8b_b51b_cd081afed742.slice/crio-e1cd5bb249ebb17c0528846b96b7206a820d090637209bd8a5a61c076bbe489b WatchSource:0}: Error finding container e1cd5bb249ebb17c0528846b96b7206a820d090637209bd8a5a61c076bbe489b: Status 404 returned error can't find the container with id e1cd5bb249ebb17c0528846b96b7206a820d090637209bd8a5a61c076bbe489b Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.899313 5014 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 05:18:00 crc kubenswrapper[5014]: I0228 05:18:00.956734 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537598-l8ms6"] Feb 28 05:18:00 crc kubenswrapper[5014]: W0228 05:18:00.956747 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdbd7594_dc62_4b56_a5f4_dc5c9d92cdd2.slice/crio-9a3daada27549d20a145b6e6183ef96faceb13e5a266400dac416860b3396fcd WatchSource:0}: Error finding container 9a3daada27549d20a145b6e6183ef96faceb13e5a266400dac416860b3396fcd: Status 404 returned error can't find the container with id 9a3daada27549d20a145b6e6183ef96faceb13e5a266400dac416860b3396fcd Feb 28 05:18:01 crc kubenswrapper[5014]: I0228 05:18:01.135672 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537598-l8ms6" event={"ID":"cdbd7594-dc62-4b56-a5f4-dc5c9d92cdd2","Type":"ContainerStarted","Data":"9a3daada27549d20a145b6e6183ef96faceb13e5a266400dac416860b3396fcd"} Feb 28 05:18:01 
crc kubenswrapper[5014]: I0228 05:18:01.137633 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2db9b9b7-c55d-4b8b-b51b-cd081afed742","Type":"ContainerStarted","Data":"e1cd5bb249ebb17c0528846b96b7206a820d090637209bd8a5a61c076bbe489b"} Feb 28 05:18:02 crc kubenswrapper[5014]: I0228 05:18:02.148523 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537598-l8ms6" event={"ID":"cdbd7594-dc62-4b56-a5f4-dc5c9d92cdd2","Type":"ContainerStarted","Data":"e67ba660f93190165df44c6a3ec26e545e5b7fef96190973e226469ac9db3a01"} Feb 28 05:18:02 crc kubenswrapper[5014]: I0228 05:18:02.166748 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537598-l8ms6" podStartSLOduration=1.307648539 podStartE2EDuration="2.166730137s" podCreationTimestamp="2026-02-28 05:18:00 +0000 UTC" firstStartedPulling="2026-02-28 05:18:00.960734772 +0000 UTC m=+2669.630860682" lastFinishedPulling="2026-02-28 05:18:01.81981637 +0000 UTC m=+2670.489942280" observedRunningTime="2026-02-28 05:18:02.163490489 +0000 UTC m=+2670.833616399" watchObservedRunningTime="2026-02-28 05:18:02.166730137 +0000 UTC m=+2670.836856047" Feb 28 05:18:03 crc kubenswrapper[5014]: I0228 05:18:03.173516 5014 generic.go:334] "Generic (PLEG): container finished" podID="cdbd7594-dc62-4b56-a5f4-dc5c9d92cdd2" containerID="e67ba660f93190165df44c6a3ec26e545e5b7fef96190973e226469ac9db3a01" exitCode=0 Feb 28 05:18:03 crc kubenswrapper[5014]: I0228 05:18:03.173659 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537598-l8ms6" event={"ID":"cdbd7594-dc62-4b56-a5f4-dc5c9d92cdd2","Type":"ContainerDied","Data":"e67ba660f93190165df44c6a3ec26e545e5b7fef96190973e226469ac9db3a01"} Feb 28 05:18:04 crc kubenswrapper[5014]: I0228 05:18:04.172613 5014 scope.go:117] "RemoveContainer" 
containerID="1bc4036cea0fa1b63db1f1c42bfc323f3ba1e3b4ac866f502b2a3d0a906681f1" Feb 28 05:18:04 crc kubenswrapper[5014]: E0228 05:18:04.172923 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:18:06 crc kubenswrapper[5014]: I0228 05:18:06.209958 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537598-l8ms6" event={"ID":"cdbd7594-dc62-4b56-a5f4-dc5c9d92cdd2","Type":"ContainerDied","Data":"9a3daada27549d20a145b6e6183ef96faceb13e5a266400dac416860b3396fcd"} Feb 28 05:18:06 crc kubenswrapper[5014]: I0228 05:18:06.210676 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a3daada27549d20a145b6e6183ef96faceb13e5a266400dac416860b3396fcd" Feb 28 05:18:06 crc kubenswrapper[5014]: I0228 05:18:06.288066 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537598-l8ms6" Feb 28 05:18:06 crc kubenswrapper[5014]: I0228 05:18:06.434596 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45swv\" (UniqueName: \"kubernetes.io/projected/cdbd7594-dc62-4b56-a5f4-dc5c9d92cdd2-kube-api-access-45swv\") pod \"cdbd7594-dc62-4b56-a5f4-dc5c9d92cdd2\" (UID: \"cdbd7594-dc62-4b56-a5f4-dc5c9d92cdd2\") " Feb 28 05:18:06 crc kubenswrapper[5014]: I0228 05:18:06.443757 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdbd7594-dc62-4b56-a5f4-dc5c9d92cdd2-kube-api-access-45swv" (OuterVolumeSpecName: "kube-api-access-45swv") pod "cdbd7594-dc62-4b56-a5f4-dc5c9d92cdd2" (UID: "cdbd7594-dc62-4b56-a5f4-dc5c9d92cdd2"). InnerVolumeSpecName "kube-api-access-45swv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:18:06 crc kubenswrapper[5014]: I0228 05:18:06.536458 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45swv\" (UniqueName: \"kubernetes.io/projected/cdbd7594-dc62-4b56-a5f4-dc5c9d92cdd2-kube-api-access-45swv\") on node \"crc\" DevicePath \"\"" Feb 28 05:18:07 crc kubenswrapper[5014]: I0228 05:18:07.219410 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537598-l8ms6" Feb 28 05:18:07 crc kubenswrapper[5014]: I0228 05:18:07.357167 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537592-kttmq"] Feb 28 05:18:07 crc kubenswrapper[5014]: I0228 05:18:07.365382 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537592-kttmq"] Feb 28 05:18:08 crc kubenswrapper[5014]: I0228 05:18:08.185499 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d9ab98e-b395-4b7f-b52a-976c3a333c37" path="/var/lib/kubelet/pods/8d9ab98e-b395-4b7f-b52a-976c3a333c37/volumes" Feb 28 05:18:18 crc kubenswrapper[5014]: I0228 05:18:18.172201 5014 scope.go:117] "RemoveContainer" containerID="1bc4036cea0fa1b63db1f1c42bfc323f3ba1e3b4ac866f502b2a3d0a906681f1" Feb 28 05:18:30 crc kubenswrapper[5014]: E0228 05:18:30.278219 5014 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 28 05:18:30 crc kubenswrapper[5014]: E0228 05:18:30.278740 5014 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kwr7s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:n
il,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(2db9b9b7-c55d-4b8b-b51b-cd081afed742): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 28 05:18:30 crc kubenswrapper[5014]: E0228 05:18:30.279932 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="2db9b9b7-c55d-4b8b-b51b-cd081afed742" Feb 28 05:18:30 crc kubenswrapper[5014]: I0228 05:18:30.469209 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" event={"ID":"6aad0009-d904-48f8-8e30-82205907ece1","Type":"ContainerStarted","Data":"c456e5f386c62d2fe58c1ccf175f9bdaa457a2719c956898c0819998d2ac4b45"} Feb 28 05:18:30 crc kubenswrapper[5014]: E0228 05:18:30.470519 5014 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="2db9b9b7-c55d-4b8b-b51b-cd081afed742" Feb 28 05:18:44 crc kubenswrapper[5014]: I0228 05:18:44.621421 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 28 05:18:46 crc kubenswrapper[5014]: I0228 05:18:46.667397 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2db9b9b7-c55d-4b8b-b51b-cd081afed742","Type":"ContainerStarted","Data":"0784ead89c86d88464af06b257b8c6833b140cddf6dedbfa095feebd3952a93e"} Feb 28 05:18:46 crc kubenswrapper[5014]: I0228 05:18:46.688173 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.9677838789999997 podStartE2EDuration="47.688155379s" podCreationTimestamp="2026-02-28 05:17:59 +0000 UTC" firstStartedPulling="2026-02-28 05:18:00.899125236 +0000 UTC m=+2669.569251146" lastFinishedPulling="2026-02-28 05:18:44.619496736 +0000 UTC m=+2713.289622646" observedRunningTime="2026-02-28 05:18:46.686403021 +0000 UTC m=+2715.356528971" watchObservedRunningTime="2026-02-28 05:18:46.688155379 +0000 UTC m=+2715.358281309" Feb 28 05:18:58 crc kubenswrapper[5014]: I0228 05:18:58.115150 5014 scope.go:117] "RemoveContainer" containerID="3f6f897728b5304a38a0863fe7ef1e0c1c337da6dda268fb4e5c239c5f60a962" Feb 28 05:20:00 crc kubenswrapper[5014]: I0228 05:20:00.150408 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537600-w7ql9"] Feb 28 05:20:00 crc kubenswrapper[5014]: E0228 05:20:00.151993 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdbd7594-dc62-4b56-a5f4-dc5c9d92cdd2" containerName="oc" Feb 28 05:20:00 crc 
kubenswrapper[5014]: I0228 05:20:00.152027 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdbd7594-dc62-4b56-a5f4-dc5c9d92cdd2" containerName="oc" Feb 28 05:20:00 crc kubenswrapper[5014]: I0228 05:20:00.152490 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdbd7594-dc62-4b56-a5f4-dc5c9d92cdd2" containerName="oc" Feb 28 05:20:00 crc kubenswrapper[5014]: I0228 05:20:00.153871 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537600-w7ql9" Feb 28 05:20:00 crc kubenswrapper[5014]: I0228 05:20:00.158146 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 05:20:00 crc kubenswrapper[5014]: I0228 05:20:00.159098 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-r6pdk" Feb 28 05:20:00 crc kubenswrapper[5014]: I0228 05:20:00.160492 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537600-w7ql9"] Feb 28 05:20:00 crc kubenswrapper[5014]: I0228 05:20:00.166340 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 05:20:00 crc kubenswrapper[5014]: I0228 05:20:00.306021 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdn44\" (UniqueName: \"kubernetes.io/projected/4d030132-c1a7-4e6a-97d5-c73a8505d92e-kube-api-access-wdn44\") pod \"auto-csr-approver-29537600-w7ql9\" (UID: \"4d030132-c1a7-4e6a-97d5-c73a8505d92e\") " pod="openshift-infra/auto-csr-approver-29537600-w7ql9" Feb 28 05:20:00 crc kubenswrapper[5014]: I0228 05:20:00.408551 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdn44\" (UniqueName: \"kubernetes.io/projected/4d030132-c1a7-4e6a-97d5-c73a8505d92e-kube-api-access-wdn44\") pod \"auto-csr-approver-29537600-w7ql9\" 
(UID: \"4d030132-c1a7-4e6a-97d5-c73a8505d92e\") " pod="openshift-infra/auto-csr-approver-29537600-w7ql9" Feb 28 05:20:00 crc kubenswrapper[5014]: I0228 05:20:00.447467 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdn44\" (UniqueName: \"kubernetes.io/projected/4d030132-c1a7-4e6a-97d5-c73a8505d92e-kube-api-access-wdn44\") pod \"auto-csr-approver-29537600-w7ql9\" (UID: \"4d030132-c1a7-4e6a-97d5-c73a8505d92e\") " pod="openshift-infra/auto-csr-approver-29537600-w7ql9" Feb 28 05:20:00 crc kubenswrapper[5014]: I0228 05:20:00.486951 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537600-w7ql9" Feb 28 05:20:00 crc kubenswrapper[5014]: I0228 05:20:00.963456 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537600-w7ql9"] Feb 28 05:20:01 crc kubenswrapper[5014]: I0228 05:20:01.470841 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537600-w7ql9" event={"ID":"4d030132-c1a7-4e6a-97d5-c73a8505d92e","Type":"ContainerStarted","Data":"90d7dfd0bc078c5ffbca278c6e303802592b01bdff0728c9b948b089de4bba35"} Feb 28 05:20:03 crc kubenswrapper[5014]: I0228 05:20:03.496090 5014 generic.go:334] "Generic (PLEG): container finished" podID="4d030132-c1a7-4e6a-97d5-c73a8505d92e" containerID="d6cbc90e4677bef25de47024655fb2510a741251ea2bce7309d212e532c9ff9e" exitCode=0 Feb 28 05:20:03 crc kubenswrapper[5014]: I0228 05:20:03.496158 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537600-w7ql9" event={"ID":"4d030132-c1a7-4e6a-97d5-c73a8505d92e","Type":"ContainerDied","Data":"d6cbc90e4677bef25de47024655fb2510a741251ea2bce7309d212e532c9ff9e"} Feb 28 05:20:04 crc kubenswrapper[5014]: I0228 05:20:04.928376 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537600-w7ql9" Feb 28 05:20:05 crc kubenswrapper[5014]: I0228 05:20:05.036178 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdn44\" (UniqueName: \"kubernetes.io/projected/4d030132-c1a7-4e6a-97d5-c73a8505d92e-kube-api-access-wdn44\") pod \"4d030132-c1a7-4e6a-97d5-c73a8505d92e\" (UID: \"4d030132-c1a7-4e6a-97d5-c73a8505d92e\") " Feb 28 05:20:05 crc kubenswrapper[5014]: I0228 05:20:05.042606 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d030132-c1a7-4e6a-97d5-c73a8505d92e-kube-api-access-wdn44" (OuterVolumeSpecName: "kube-api-access-wdn44") pod "4d030132-c1a7-4e6a-97d5-c73a8505d92e" (UID: "4d030132-c1a7-4e6a-97d5-c73a8505d92e"). InnerVolumeSpecName "kube-api-access-wdn44". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:20:05 crc kubenswrapper[5014]: I0228 05:20:05.139572 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdn44\" (UniqueName: \"kubernetes.io/projected/4d030132-c1a7-4e6a-97d5-c73a8505d92e-kube-api-access-wdn44\") on node \"crc\" DevicePath \"\"" Feb 28 05:20:05 crc kubenswrapper[5014]: I0228 05:20:05.517710 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537600-w7ql9" event={"ID":"4d030132-c1a7-4e6a-97d5-c73a8505d92e","Type":"ContainerDied","Data":"90d7dfd0bc078c5ffbca278c6e303802592b01bdff0728c9b948b089de4bba35"} Feb 28 05:20:05 crc kubenswrapper[5014]: I0228 05:20:05.518192 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90d7dfd0bc078c5ffbca278c6e303802592b01bdff0728c9b948b089de4bba35" Feb 28 05:20:05 crc kubenswrapper[5014]: I0228 05:20:05.517758 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537600-w7ql9" Feb 28 05:20:06 crc kubenswrapper[5014]: I0228 05:20:06.008270 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537594-99x2c"] Feb 28 05:20:06 crc kubenswrapper[5014]: I0228 05:20:06.018024 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537594-99x2c"] Feb 28 05:20:06 crc kubenswrapper[5014]: I0228 05:20:06.186136 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b5e4216-03d6-4fc8-93b1-9eb5cafbfc5e" path="/var/lib/kubelet/pods/9b5e4216-03d6-4fc8-93b1-9eb5cafbfc5e/volumes" Feb 28 05:20:17 crc kubenswrapper[5014]: I0228 05:20:17.408652 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7gkhh"] Feb 28 05:20:17 crc kubenswrapper[5014]: E0228 05:20:17.410162 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d030132-c1a7-4e6a-97d5-c73a8505d92e" containerName="oc" Feb 28 05:20:17 crc kubenswrapper[5014]: I0228 05:20:17.410194 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d030132-c1a7-4e6a-97d5-c73a8505d92e" containerName="oc" Feb 28 05:20:17 crc kubenswrapper[5014]: I0228 05:20:17.410606 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d030132-c1a7-4e6a-97d5-c73a8505d92e" containerName="oc" Feb 28 05:20:17 crc kubenswrapper[5014]: I0228 05:20:17.413073 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7gkhh" Feb 28 05:20:17 crc kubenswrapper[5014]: I0228 05:20:17.444321 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7gkhh"] Feb 28 05:20:17 crc kubenswrapper[5014]: I0228 05:20:17.520644 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvbs6\" (UniqueName: \"kubernetes.io/projected/ea7a21f8-c0fa-439e-a17b-1ec78e92569d-kube-api-access-wvbs6\") pod \"community-operators-7gkhh\" (UID: \"ea7a21f8-c0fa-439e-a17b-1ec78e92569d\") " pod="openshift-marketplace/community-operators-7gkhh" Feb 28 05:20:17 crc kubenswrapper[5014]: I0228 05:20:17.520706 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea7a21f8-c0fa-439e-a17b-1ec78e92569d-utilities\") pod \"community-operators-7gkhh\" (UID: \"ea7a21f8-c0fa-439e-a17b-1ec78e92569d\") " pod="openshift-marketplace/community-operators-7gkhh" Feb 28 05:20:17 crc kubenswrapper[5014]: I0228 05:20:17.520784 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea7a21f8-c0fa-439e-a17b-1ec78e92569d-catalog-content\") pod \"community-operators-7gkhh\" (UID: \"ea7a21f8-c0fa-439e-a17b-1ec78e92569d\") " pod="openshift-marketplace/community-operators-7gkhh" Feb 28 05:20:17 crc kubenswrapper[5014]: I0228 05:20:17.622331 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea7a21f8-c0fa-439e-a17b-1ec78e92569d-catalog-content\") pod \"community-operators-7gkhh\" (UID: \"ea7a21f8-c0fa-439e-a17b-1ec78e92569d\") " pod="openshift-marketplace/community-operators-7gkhh" Feb 28 05:20:17 crc kubenswrapper[5014]: I0228 05:20:17.622482 5014 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-wvbs6\" (UniqueName: \"kubernetes.io/projected/ea7a21f8-c0fa-439e-a17b-1ec78e92569d-kube-api-access-wvbs6\") pod \"community-operators-7gkhh\" (UID: \"ea7a21f8-c0fa-439e-a17b-1ec78e92569d\") " pod="openshift-marketplace/community-operators-7gkhh" Feb 28 05:20:17 crc kubenswrapper[5014]: I0228 05:20:17.622505 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea7a21f8-c0fa-439e-a17b-1ec78e92569d-utilities\") pod \"community-operators-7gkhh\" (UID: \"ea7a21f8-c0fa-439e-a17b-1ec78e92569d\") " pod="openshift-marketplace/community-operators-7gkhh" Feb 28 05:20:17 crc kubenswrapper[5014]: I0228 05:20:17.622902 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea7a21f8-c0fa-439e-a17b-1ec78e92569d-utilities\") pod \"community-operators-7gkhh\" (UID: \"ea7a21f8-c0fa-439e-a17b-1ec78e92569d\") " pod="openshift-marketplace/community-operators-7gkhh" Feb 28 05:20:17 crc kubenswrapper[5014]: I0228 05:20:17.622921 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea7a21f8-c0fa-439e-a17b-1ec78e92569d-catalog-content\") pod \"community-operators-7gkhh\" (UID: \"ea7a21f8-c0fa-439e-a17b-1ec78e92569d\") " pod="openshift-marketplace/community-operators-7gkhh" Feb 28 05:20:17 crc kubenswrapper[5014]: I0228 05:20:17.643781 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvbs6\" (UniqueName: \"kubernetes.io/projected/ea7a21f8-c0fa-439e-a17b-1ec78e92569d-kube-api-access-wvbs6\") pod \"community-operators-7gkhh\" (UID: \"ea7a21f8-c0fa-439e-a17b-1ec78e92569d\") " pod="openshift-marketplace/community-operators-7gkhh" Feb 28 05:20:17 crc kubenswrapper[5014]: I0228 05:20:17.750279 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7gkhh" Feb 28 05:20:18 crc kubenswrapper[5014]: I0228 05:20:18.270361 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7gkhh"] Feb 28 05:20:18 crc kubenswrapper[5014]: I0228 05:20:18.641894 5014 generic.go:334] "Generic (PLEG): container finished" podID="ea7a21f8-c0fa-439e-a17b-1ec78e92569d" containerID="1cf87ba9e915d6f438a6e4f4ccd05e37e451fa90e36884013b54366df2c086d4" exitCode=0 Feb 28 05:20:18 crc kubenswrapper[5014]: I0228 05:20:18.641988 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gkhh" event={"ID":"ea7a21f8-c0fa-439e-a17b-1ec78e92569d","Type":"ContainerDied","Data":"1cf87ba9e915d6f438a6e4f4ccd05e37e451fa90e36884013b54366df2c086d4"} Feb 28 05:20:18 crc kubenswrapper[5014]: I0228 05:20:18.642194 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gkhh" event={"ID":"ea7a21f8-c0fa-439e-a17b-1ec78e92569d","Type":"ContainerStarted","Data":"572b2149ade28c4fd9b72914aa8f937f2e6ac069263e72ad4a14daac5afbc854"} Feb 28 05:20:19 crc kubenswrapper[5014]: I0228 05:20:19.652496 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gkhh" event={"ID":"ea7a21f8-c0fa-439e-a17b-1ec78e92569d","Type":"ContainerStarted","Data":"2465a1114b11f9b2187353ccde1a4a5399978ececad561effe61a4bd65b2a263"} Feb 28 05:20:20 crc kubenswrapper[5014]: I0228 05:20:20.661694 5014 generic.go:334] "Generic (PLEG): container finished" podID="ea7a21f8-c0fa-439e-a17b-1ec78e92569d" containerID="2465a1114b11f9b2187353ccde1a4a5399978ececad561effe61a4bd65b2a263" exitCode=0 Feb 28 05:20:20 crc kubenswrapper[5014]: I0228 05:20:20.661766 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gkhh" 
event={"ID":"ea7a21f8-c0fa-439e-a17b-1ec78e92569d","Type":"ContainerDied","Data":"2465a1114b11f9b2187353ccde1a4a5399978ececad561effe61a4bd65b2a263"} Feb 28 05:20:21 crc kubenswrapper[5014]: I0228 05:20:21.678121 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gkhh" event={"ID":"ea7a21f8-c0fa-439e-a17b-1ec78e92569d","Type":"ContainerStarted","Data":"f6b6dd761ed8ef352a84a0c214f9a59803c634e4d0bf0a6d16be4bda58d72d83"} Feb 28 05:20:21 crc kubenswrapper[5014]: I0228 05:20:21.705309 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7gkhh" podStartSLOduration=2.276189395 podStartE2EDuration="4.705289822s" podCreationTimestamp="2026-02-28 05:20:17 +0000 UTC" firstStartedPulling="2026-02-28 05:20:18.643752662 +0000 UTC m=+2807.313878572" lastFinishedPulling="2026-02-28 05:20:21.072853079 +0000 UTC m=+2809.742978999" observedRunningTime="2026-02-28 05:20:21.700852951 +0000 UTC m=+2810.370978861" watchObservedRunningTime="2026-02-28 05:20:21.705289822 +0000 UTC m=+2810.375415732" Feb 28 05:20:27 crc kubenswrapper[5014]: I0228 05:20:27.750787 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7gkhh" Feb 28 05:20:27 crc kubenswrapper[5014]: I0228 05:20:27.751524 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7gkhh" Feb 28 05:20:27 crc kubenswrapper[5014]: I0228 05:20:27.796150 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7gkhh" Feb 28 05:20:28 crc kubenswrapper[5014]: I0228 05:20:28.801092 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7gkhh" Feb 28 05:20:28 crc kubenswrapper[5014]: I0228 05:20:28.855899 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-7gkhh"] Feb 28 05:20:30 crc kubenswrapper[5014]: I0228 05:20:30.760482 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7gkhh" podUID="ea7a21f8-c0fa-439e-a17b-1ec78e92569d" containerName="registry-server" containerID="cri-o://f6b6dd761ed8ef352a84a0c214f9a59803c634e4d0bf0a6d16be4bda58d72d83" gracePeriod=2 Feb 28 05:20:31 crc kubenswrapper[5014]: I0228 05:20:31.273602 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7gkhh" Feb 28 05:20:31 crc kubenswrapper[5014]: I0228 05:20:31.462577 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea7a21f8-c0fa-439e-a17b-1ec78e92569d-catalog-content\") pod \"ea7a21f8-c0fa-439e-a17b-1ec78e92569d\" (UID: \"ea7a21f8-c0fa-439e-a17b-1ec78e92569d\") " Feb 28 05:20:31 crc kubenswrapper[5014]: I0228 05:20:31.464767 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea7a21f8-c0fa-439e-a17b-1ec78e92569d-utilities\") pod \"ea7a21f8-c0fa-439e-a17b-1ec78e92569d\" (UID: \"ea7a21f8-c0fa-439e-a17b-1ec78e92569d\") " Feb 28 05:20:31 crc kubenswrapper[5014]: I0228 05:20:31.465042 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvbs6\" (UniqueName: \"kubernetes.io/projected/ea7a21f8-c0fa-439e-a17b-1ec78e92569d-kube-api-access-wvbs6\") pod \"ea7a21f8-c0fa-439e-a17b-1ec78e92569d\" (UID: \"ea7a21f8-c0fa-439e-a17b-1ec78e92569d\") " Feb 28 05:20:31 crc kubenswrapper[5014]: I0228 05:20:31.465687 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea7a21f8-c0fa-439e-a17b-1ec78e92569d-utilities" (OuterVolumeSpecName: "utilities") pod "ea7a21f8-c0fa-439e-a17b-1ec78e92569d" (UID: 
"ea7a21f8-c0fa-439e-a17b-1ec78e92569d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:20:31 crc kubenswrapper[5014]: I0228 05:20:31.472775 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea7a21f8-c0fa-439e-a17b-1ec78e92569d-kube-api-access-wvbs6" (OuterVolumeSpecName: "kube-api-access-wvbs6") pod "ea7a21f8-c0fa-439e-a17b-1ec78e92569d" (UID: "ea7a21f8-c0fa-439e-a17b-1ec78e92569d"). InnerVolumeSpecName "kube-api-access-wvbs6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:20:31 crc kubenswrapper[5014]: I0228 05:20:31.541926 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea7a21f8-c0fa-439e-a17b-1ec78e92569d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea7a21f8-c0fa-439e-a17b-1ec78e92569d" (UID: "ea7a21f8-c0fa-439e-a17b-1ec78e92569d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:20:31 crc kubenswrapper[5014]: I0228 05:20:31.567624 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvbs6\" (UniqueName: \"kubernetes.io/projected/ea7a21f8-c0fa-439e-a17b-1ec78e92569d-kube-api-access-wvbs6\") on node \"crc\" DevicePath \"\"" Feb 28 05:20:31 crc kubenswrapper[5014]: I0228 05:20:31.567655 5014 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea7a21f8-c0fa-439e-a17b-1ec78e92569d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 05:20:31 crc kubenswrapper[5014]: I0228 05:20:31.567665 5014 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea7a21f8-c0fa-439e-a17b-1ec78e92569d-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 05:20:31 crc kubenswrapper[5014]: I0228 05:20:31.774362 5014 generic.go:334] "Generic (PLEG): container finished" 
podID="ea7a21f8-c0fa-439e-a17b-1ec78e92569d" containerID="f6b6dd761ed8ef352a84a0c214f9a59803c634e4d0bf0a6d16be4bda58d72d83" exitCode=0 Feb 28 05:20:31 crc kubenswrapper[5014]: I0228 05:20:31.774423 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gkhh" event={"ID":"ea7a21f8-c0fa-439e-a17b-1ec78e92569d","Type":"ContainerDied","Data":"f6b6dd761ed8ef352a84a0c214f9a59803c634e4d0bf0a6d16be4bda58d72d83"} Feb 28 05:20:31 crc kubenswrapper[5014]: I0228 05:20:31.774488 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7gkhh" event={"ID":"ea7a21f8-c0fa-439e-a17b-1ec78e92569d","Type":"ContainerDied","Data":"572b2149ade28c4fd9b72914aa8f937f2e6ac069263e72ad4a14daac5afbc854"} Feb 28 05:20:31 crc kubenswrapper[5014]: I0228 05:20:31.774519 5014 scope.go:117] "RemoveContainer" containerID="f6b6dd761ed8ef352a84a0c214f9a59803c634e4d0bf0a6d16be4bda58d72d83" Feb 28 05:20:31 crc kubenswrapper[5014]: I0228 05:20:31.774439 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7gkhh" Feb 28 05:20:31 crc kubenswrapper[5014]: I0228 05:20:31.825073 5014 scope.go:117] "RemoveContainer" containerID="2465a1114b11f9b2187353ccde1a4a5399978ececad561effe61a4bd65b2a263" Feb 28 05:20:31 crc kubenswrapper[5014]: I0228 05:20:31.827736 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7gkhh"] Feb 28 05:20:31 crc kubenswrapper[5014]: I0228 05:20:31.838074 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7gkhh"] Feb 28 05:20:31 crc kubenswrapper[5014]: I0228 05:20:31.864620 5014 scope.go:117] "RemoveContainer" containerID="1cf87ba9e915d6f438a6e4f4ccd05e37e451fa90e36884013b54366df2c086d4" Feb 28 05:20:31 crc kubenswrapper[5014]: I0228 05:20:31.908577 5014 scope.go:117] "RemoveContainer" containerID="f6b6dd761ed8ef352a84a0c214f9a59803c634e4d0bf0a6d16be4bda58d72d83" Feb 28 05:20:31 crc kubenswrapper[5014]: E0228 05:20:31.909257 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6b6dd761ed8ef352a84a0c214f9a59803c634e4d0bf0a6d16be4bda58d72d83\": container with ID starting with f6b6dd761ed8ef352a84a0c214f9a59803c634e4d0bf0a6d16be4bda58d72d83 not found: ID does not exist" containerID="f6b6dd761ed8ef352a84a0c214f9a59803c634e4d0bf0a6d16be4bda58d72d83" Feb 28 05:20:31 crc kubenswrapper[5014]: I0228 05:20:31.909296 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6b6dd761ed8ef352a84a0c214f9a59803c634e4d0bf0a6d16be4bda58d72d83"} err="failed to get container status \"f6b6dd761ed8ef352a84a0c214f9a59803c634e4d0bf0a6d16be4bda58d72d83\": rpc error: code = NotFound desc = could not find container \"f6b6dd761ed8ef352a84a0c214f9a59803c634e4d0bf0a6d16be4bda58d72d83\": container with ID starting with f6b6dd761ed8ef352a84a0c214f9a59803c634e4d0bf0a6d16be4bda58d72d83 not 
found: ID does not exist" Feb 28 05:20:31 crc kubenswrapper[5014]: I0228 05:20:31.909320 5014 scope.go:117] "RemoveContainer" containerID="2465a1114b11f9b2187353ccde1a4a5399978ececad561effe61a4bd65b2a263" Feb 28 05:20:31 crc kubenswrapper[5014]: E0228 05:20:31.909565 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2465a1114b11f9b2187353ccde1a4a5399978ececad561effe61a4bd65b2a263\": container with ID starting with 2465a1114b11f9b2187353ccde1a4a5399978ececad561effe61a4bd65b2a263 not found: ID does not exist" containerID="2465a1114b11f9b2187353ccde1a4a5399978ececad561effe61a4bd65b2a263" Feb 28 05:20:31 crc kubenswrapper[5014]: I0228 05:20:31.909586 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2465a1114b11f9b2187353ccde1a4a5399978ececad561effe61a4bd65b2a263"} err="failed to get container status \"2465a1114b11f9b2187353ccde1a4a5399978ececad561effe61a4bd65b2a263\": rpc error: code = NotFound desc = could not find container \"2465a1114b11f9b2187353ccde1a4a5399978ececad561effe61a4bd65b2a263\": container with ID starting with 2465a1114b11f9b2187353ccde1a4a5399978ececad561effe61a4bd65b2a263 not found: ID does not exist" Feb 28 05:20:31 crc kubenswrapper[5014]: I0228 05:20:31.909602 5014 scope.go:117] "RemoveContainer" containerID="1cf87ba9e915d6f438a6e4f4ccd05e37e451fa90e36884013b54366df2c086d4" Feb 28 05:20:31 crc kubenswrapper[5014]: E0228 05:20:31.909795 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cf87ba9e915d6f438a6e4f4ccd05e37e451fa90e36884013b54366df2c086d4\": container with ID starting with 1cf87ba9e915d6f438a6e4f4ccd05e37e451fa90e36884013b54366df2c086d4 not found: ID does not exist" containerID="1cf87ba9e915d6f438a6e4f4ccd05e37e451fa90e36884013b54366df2c086d4" Feb 28 05:20:31 crc kubenswrapper[5014]: I0228 05:20:31.909857 5014 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cf87ba9e915d6f438a6e4f4ccd05e37e451fa90e36884013b54366df2c086d4"} err="failed to get container status \"1cf87ba9e915d6f438a6e4f4ccd05e37e451fa90e36884013b54366df2c086d4\": rpc error: code = NotFound desc = could not find container \"1cf87ba9e915d6f438a6e4f4ccd05e37e451fa90e36884013b54366df2c086d4\": container with ID starting with 1cf87ba9e915d6f438a6e4f4ccd05e37e451fa90e36884013b54366df2c086d4 not found: ID does not exist" Feb 28 05:20:32 crc kubenswrapper[5014]: I0228 05:20:32.185070 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea7a21f8-c0fa-439e-a17b-1ec78e92569d" path="/var/lib/kubelet/pods/ea7a21f8-c0fa-439e-a17b-1ec78e92569d/volumes" Feb 28 05:20:45 crc kubenswrapper[5014]: I0228 05:20:45.706160 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 05:20:45 crc kubenswrapper[5014]: I0228 05:20:45.706723 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 05:20:58 crc kubenswrapper[5014]: I0228 05:20:58.243250 5014 scope.go:117] "RemoveContainer" containerID="0972b63af0684bbd7174214e5fdb8ae552e79cf1735d152803f41db87e79e5a2" Feb 28 05:21:15 crc kubenswrapper[5014]: I0228 05:21:15.706720 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 05:21:15 crc kubenswrapper[5014]: I0228 05:21:15.707362 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 05:21:45 crc kubenswrapper[5014]: I0228 05:21:45.711902 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 05:21:45 crc kubenswrapper[5014]: I0228 05:21:45.712509 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 05:21:45 crc kubenswrapper[5014]: I0228 05:21:45.712593 5014 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cct62" Feb 28 05:21:45 crc kubenswrapper[5014]: I0228 05:21:45.713342 5014 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c456e5f386c62d2fe58c1ccf175f9bdaa457a2719c956898c0819998d2ac4b45"} pod="openshift-machine-config-operator/machine-config-daemon-cct62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 05:21:45 crc kubenswrapper[5014]: I0228 05:21:45.713408 5014 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" containerID="cri-o://c456e5f386c62d2fe58c1ccf175f9bdaa457a2719c956898c0819998d2ac4b45" gracePeriod=600 Feb 28 05:21:45 crc kubenswrapper[5014]: I0228 05:21:45.948056 5014 generic.go:334] "Generic (PLEG): container finished" podID="6aad0009-d904-48f8-8e30-82205907ece1" containerID="c456e5f386c62d2fe58c1ccf175f9bdaa457a2719c956898c0819998d2ac4b45" exitCode=0 Feb 28 05:21:45 crc kubenswrapper[5014]: I0228 05:21:45.948326 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" event={"ID":"6aad0009-d904-48f8-8e30-82205907ece1","Type":"ContainerDied","Data":"c456e5f386c62d2fe58c1ccf175f9bdaa457a2719c956898c0819998d2ac4b45"} Feb 28 05:21:45 crc kubenswrapper[5014]: I0228 05:21:45.948358 5014 scope.go:117] "RemoveContainer" containerID="1bc4036cea0fa1b63db1f1c42bfc323f3ba1e3b4ac866f502b2a3d0a906681f1" Feb 28 05:21:46 crc kubenswrapper[5014]: I0228 05:21:46.959018 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" event={"ID":"6aad0009-d904-48f8-8e30-82205907ece1","Type":"ContainerStarted","Data":"192dfe00bc68c47111b7f3bc09d6d50a5ae0c8daea27c02e7d24d0f8e808dd55"} Feb 28 05:22:00 crc kubenswrapper[5014]: I0228 05:22:00.145228 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537602-wgkjt"] Feb 28 05:22:00 crc kubenswrapper[5014]: E0228 05:22:00.146292 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea7a21f8-c0fa-439e-a17b-1ec78e92569d" containerName="extract-content" Feb 28 05:22:00 crc kubenswrapper[5014]: I0228 05:22:00.146306 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea7a21f8-c0fa-439e-a17b-1ec78e92569d" containerName="extract-content" Feb 28 05:22:00 crc kubenswrapper[5014]: E0228 05:22:00.146331 5014 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea7a21f8-c0fa-439e-a17b-1ec78e92569d" containerName="registry-server" Feb 28 05:22:00 crc kubenswrapper[5014]: I0228 05:22:00.146337 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea7a21f8-c0fa-439e-a17b-1ec78e92569d" containerName="registry-server" Feb 28 05:22:00 crc kubenswrapper[5014]: E0228 05:22:00.146349 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea7a21f8-c0fa-439e-a17b-1ec78e92569d" containerName="extract-utilities" Feb 28 05:22:00 crc kubenswrapper[5014]: I0228 05:22:00.146358 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea7a21f8-c0fa-439e-a17b-1ec78e92569d" containerName="extract-utilities" Feb 28 05:22:00 crc kubenswrapper[5014]: I0228 05:22:00.146564 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea7a21f8-c0fa-439e-a17b-1ec78e92569d" containerName="registry-server" Feb 28 05:22:00 crc kubenswrapper[5014]: I0228 05:22:00.147278 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537602-wgkjt" Feb 28 05:22:00 crc kubenswrapper[5014]: I0228 05:22:00.150193 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-r6pdk" Feb 28 05:22:00 crc kubenswrapper[5014]: I0228 05:22:00.150295 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 05:22:00 crc kubenswrapper[5014]: I0228 05:22:00.150517 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 05:22:00 crc kubenswrapper[5014]: I0228 05:22:00.159070 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537602-wgkjt"] Feb 28 05:22:00 crc kubenswrapper[5014]: I0228 05:22:00.307958 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s99c9\" (UniqueName: \"kubernetes.io/projected/33996db3-13d8-4fc6-a95e-de8b2582ddf7-kube-api-access-s99c9\") pod \"auto-csr-approver-29537602-wgkjt\" (UID: \"33996db3-13d8-4fc6-a95e-de8b2582ddf7\") " pod="openshift-infra/auto-csr-approver-29537602-wgkjt" Feb 28 05:22:00 crc kubenswrapper[5014]: I0228 05:22:00.409596 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s99c9\" (UniqueName: \"kubernetes.io/projected/33996db3-13d8-4fc6-a95e-de8b2582ddf7-kube-api-access-s99c9\") pod \"auto-csr-approver-29537602-wgkjt\" (UID: \"33996db3-13d8-4fc6-a95e-de8b2582ddf7\") " pod="openshift-infra/auto-csr-approver-29537602-wgkjt" Feb 28 05:22:00 crc kubenswrapper[5014]: I0228 05:22:00.470535 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s99c9\" (UniqueName: \"kubernetes.io/projected/33996db3-13d8-4fc6-a95e-de8b2582ddf7-kube-api-access-s99c9\") pod \"auto-csr-approver-29537602-wgkjt\" (UID: \"33996db3-13d8-4fc6-a95e-de8b2582ddf7\") " 
pod="openshift-infra/auto-csr-approver-29537602-wgkjt" Feb 28 05:22:00 crc kubenswrapper[5014]: I0228 05:22:00.484284 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537602-wgkjt" Feb 28 05:22:00 crc kubenswrapper[5014]: I0228 05:22:00.981985 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537602-wgkjt"] Feb 28 05:22:00 crc kubenswrapper[5014]: W0228 05:22:00.985599 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod33996db3_13d8_4fc6_a95e_de8b2582ddf7.slice/crio-982ea7d62d4e21648fdf72f3aea06d0ea978594b63a63d319bb00dbb18bd86ce WatchSource:0}: Error finding container 982ea7d62d4e21648fdf72f3aea06d0ea978594b63a63d319bb00dbb18bd86ce: Status 404 returned error can't find the container with id 982ea7d62d4e21648fdf72f3aea06d0ea978594b63a63d319bb00dbb18bd86ce Feb 28 05:22:01 crc kubenswrapper[5014]: I0228 05:22:01.102401 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537602-wgkjt" event={"ID":"33996db3-13d8-4fc6-a95e-de8b2582ddf7","Type":"ContainerStarted","Data":"982ea7d62d4e21648fdf72f3aea06d0ea978594b63a63d319bb00dbb18bd86ce"} Feb 28 05:22:03 crc kubenswrapper[5014]: I0228 05:22:03.356371 5014 generic.go:334] "Generic (PLEG): container finished" podID="33996db3-13d8-4fc6-a95e-de8b2582ddf7" containerID="a4cb4a8486521786d2b9f49032687480ceda944606063326dbeb1ab6411b7726" exitCode=0 Feb 28 05:22:03 crc kubenswrapper[5014]: I0228 05:22:03.357244 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537602-wgkjt" event={"ID":"33996db3-13d8-4fc6-a95e-de8b2582ddf7","Type":"ContainerDied","Data":"a4cb4a8486521786d2b9f49032687480ceda944606063326dbeb1ab6411b7726"} Feb 28 05:22:04 crc kubenswrapper[5014]: I0228 05:22:04.781150 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537602-wgkjt" Feb 28 05:22:04 crc kubenswrapper[5014]: I0228 05:22:04.870143 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s99c9\" (UniqueName: \"kubernetes.io/projected/33996db3-13d8-4fc6-a95e-de8b2582ddf7-kube-api-access-s99c9\") pod \"33996db3-13d8-4fc6-a95e-de8b2582ddf7\" (UID: \"33996db3-13d8-4fc6-a95e-de8b2582ddf7\") " Feb 28 05:22:04 crc kubenswrapper[5014]: I0228 05:22:04.879890 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33996db3-13d8-4fc6-a95e-de8b2582ddf7-kube-api-access-s99c9" (OuterVolumeSpecName: "kube-api-access-s99c9") pod "33996db3-13d8-4fc6-a95e-de8b2582ddf7" (UID: "33996db3-13d8-4fc6-a95e-de8b2582ddf7"). InnerVolumeSpecName "kube-api-access-s99c9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:22:04 crc kubenswrapper[5014]: I0228 05:22:04.973048 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s99c9\" (UniqueName: \"kubernetes.io/projected/33996db3-13d8-4fc6-a95e-de8b2582ddf7-kube-api-access-s99c9\") on node \"crc\" DevicePath \"\"" Feb 28 05:22:05 crc kubenswrapper[5014]: I0228 05:22:05.377384 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537602-wgkjt" event={"ID":"33996db3-13d8-4fc6-a95e-de8b2582ddf7","Type":"ContainerDied","Data":"982ea7d62d4e21648fdf72f3aea06d0ea978594b63a63d319bb00dbb18bd86ce"} Feb 28 05:22:05 crc kubenswrapper[5014]: I0228 05:22:05.377686 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="982ea7d62d4e21648fdf72f3aea06d0ea978594b63a63d319bb00dbb18bd86ce" Feb 28 05:22:05 crc kubenswrapper[5014]: I0228 05:22:05.377752 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537602-wgkjt" Feb 28 05:22:05 crc kubenswrapper[5014]: I0228 05:22:05.872306 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537596-78pqt"] Feb 28 05:22:05 crc kubenswrapper[5014]: I0228 05:22:05.882766 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537596-78pqt"] Feb 28 05:22:06 crc kubenswrapper[5014]: I0228 05:22:06.185032 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce144927-6302-48e8-a467-a4cdb3f5f931" path="/var/lib/kubelet/pods/ce144927-6302-48e8-a467-a4cdb3f5f931/volumes" Feb 28 05:22:24 crc kubenswrapper[5014]: I0228 05:22:24.199719 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xf8tb"] Feb 28 05:22:24 crc kubenswrapper[5014]: E0228 05:22:24.200747 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33996db3-13d8-4fc6-a95e-de8b2582ddf7" containerName="oc" Feb 28 05:22:24 crc kubenswrapper[5014]: I0228 05:22:24.200761 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="33996db3-13d8-4fc6-a95e-de8b2582ddf7" containerName="oc" Feb 28 05:22:24 crc kubenswrapper[5014]: I0228 05:22:24.200959 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="33996db3-13d8-4fc6-a95e-de8b2582ddf7" containerName="oc" Feb 28 05:22:24 crc kubenswrapper[5014]: I0228 05:22:24.202266 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xf8tb" Feb 28 05:22:24 crc kubenswrapper[5014]: I0228 05:22:24.209485 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xf8tb"] Feb 28 05:22:24 crc kubenswrapper[5014]: I0228 05:22:24.277428 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5e1adae-05bc-4292-bdee-d18bfe732b32-utilities\") pod \"certified-operators-xf8tb\" (UID: \"a5e1adae-05bc-4292-bdee-d18bfe732b32\") " pod="openshift-marketplace/certified-operators-xf8tb" Feb 28 05:22:24 crc kubenswrapper[5014]: I0228 05:22:24.277633 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5e1adae-05bc-4292-bdee-d18bfe732b32-catalog-content\") pod \"certified-operators-xf8tb\" (UID: \"a5e1adae-05bc-4292-bdee-d18bfe732b32\") " pod="openshift-marketplace/certified-operators-xf8tb" Feb 28 05:22:24 crc kubenswrapper[5014]: I0228 05:22:24.277671 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zk2k\" (UniqueName: \"kubernetes.io/projected/a5e1adae-05bc-4292-bdee-d18bfe732b32-kube-api-access-2zk2k\") pod \"certified-operators-xf8tb\" (UID: \"a5e1adae-05bc-4292-bdee-d18bfe732b32\") " pod="openshift-marketplace/certified-operators-xf8tb" Feb 28 05:22:24 crc kubenswrapper[5014]: I0228 05:22:24.379791 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5e1adae-05bc-4292-bdee-d18bfe732b32-catalog-content\") pod \"certified-operators-xf8tb\" (UID: \"a5e1adae-05bc-4292-bdee-d18bfe732b32\") " pod="openshift-marketplace/certified-operators-xf8tb" Feb 28 05:22:24 crc kubenswrapper[5014]: I0228 05:22:24.379861 5014 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-2zk2k\" (UniqueName: \"kubernetes.io/projected/a5e1adae-05bc-4292-bdee-d18bfe732b32-kube-api-access-2zk2k\") pod \"certified-operators-xf8tb\" (UID: \"a5e1adae-05bc-4292-bdee-d18bfe732b32\") " pod="openshift-marketplace/certified-operators-xf8tb" Feb 28 05:22:24 crc kubenswrapper[5014]: I0228 05:22:24.379950 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5e1adae-05bc-4292-bdee-d18bfe732b32-utilities\") pod \"certified-operators-xf8tb\" (UID: \"a5e1adae-05bc-4292-bdee-d18bfe732b32\") " pod="openshift-marketplace/certified-operators-xf8tb" Feb 28 05:22:24 crc kubenswrapper[5014]: I0228 05:22:24.380348 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5e1adae-05bc-4292-bdee-d18bfe732b32-catalog-content\") pod \"certified-operators-xf8tb\" (UID: \"a5e1adae-05bc-4292-bdee-d18bfe732b32\") " pod="openshift-marketplace/certified-operators-xf8tb" Feb 28 05:22:24 crc kubenswrapper[5014]: I0228 05:22:24.380381 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5e1adae-05bc-4292-bdee-d18bfe732b32-utilities\") pod \"certified-operators-xf8tb\" (UID: \"a5e1adae-05bc-4292-bdee-d18bfe732b32\") " pod="openshift-marketplace/certified-operators-xf8tb" Feb 28 05:22:24 crc kubenswrapper[5014]: I0228 05:22:24.412877 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zk2k\" (UniqueName: \"kubernetes.io/projected/a5e1adae-05bc-4292-bdee-d18bfe732b32-kube-api-access-2zk2k\") pod \"certified-operators-xf8tb\" (UID: \"a5e1adae-05bc-4292-bdee-d18bfe732b32\") " pod="openshift-marketplace/certified-operators-xf8tb" Feb 28 05:22:24 crc kubenswrapper[5014]: I0228 05:22:24.522757 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xf8tb" Feb 28 05:22:25 crc kubenswrapper[5014]: I0228 05:22:25.088116 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xf8tb"] Feb 28 05:22:25 crc kubenswrapper[5014]: I0228 05:22:25.583227 5014 generic.go:334] "Generic (PLEG): container finished" podID="a5e1adae-05bc-4292-bdee-d18bfe732b32" containerID="fab1039a20ecb1e5eacab4f3d7679ca1387db6148ca1e2d47e8bd58a2efe19ee" exitCode=0 Feb 28 05:22:25 crc kubenswrapper[5014]: I0228 05:22:25.583278 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xf8tb" event={"ID":"a5e1adae-05bc-4292-bdee-d18bfe732b32","Type":"ContainerDied","Data":"fab1039a20ecb1e5eacab4f3d7679ca1387db6148ca1e2d47e8bd58a2efe19ee"} Feb 28 05:22:25 crc kubenswrapper[5014]: I0228 05:22:25.583525 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xf8tb" event={"ID":"a5e1adae-05bc-4292-bdee-d18bfe732b32","Type":"ContainerStarted","Data":"86e5def1067f3a074192f476d830ccd70169fc6f6ac6b2f8b394cf8232db6dc8"} Feb 28 05:22:26 crc kubenswrapper[5014]: I0228 05:22:26.594057 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xf8tb" event={"ID":"a5e1adae-05bc-4292-bdee-d18bfe732b32","Type":"ContainerStarted","Data":"70d5806709c5742ddd7f9bea9ac0fcf3a0d3e394ca429d6c43ae60ae93d27d95"} Feb 28 05:22:27 crc kubenswrapper[5014]: I0228 05:22:27.607955 5014 generic.go:334] "Generic (PLEG): container finished" podID="a5e1adae-05bc-4292-bdee-d18bfe732b32" containerID="70d5806709c5742ddd7f9bea9ac0fcf3a0d3e394ca429d6c43ae60ae93d27d95" exitCode=0 Feb 28 05:22:27 crc kubenswrapper[5014]: I0228 05:22:27.608117 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xf8tb" 
event={"ID":"a5e1adae-05bc-4292-bdee-d18bfe732b32","Type":"ContainerDied","Data":"70d5806709c5742ddd7f9bea9ac0fcf3a0d3e394ca429d6c43ae60ae93d27d95"} Feb 28 05:22:28 crc kubenswrapper[5014]: I0228 05:22:28.620602 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xf8tb" event={"ID":"a5e1adae-05bc-4292-bdee-d18bfe732b32","Type":"ContainerStarted","Data":"90ada7c5236d998e3f2e25b3ebd36a30d0660a9b9037061765dd0f287662f2c0"} Feb 28 05:22:28 crc kubenswrapper[5014]: I0228 05:22:28.651058 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xf8tb" podStartSLOduration=2.221027707 podStartE2EDuration="4.651036498s" podCreationTimestamp="2026-02-28 05:22:24 +0000 UTC" firstStartedPulling="2026-02-28 05:22:25.585073828 +0000 UTC m=+2934.255199738" lastFinishedPulling="2026-02-28 05:22:28.015082619 +0000 UTC m=+2936.685208529" observedRunningTime="2026-02-28 05:22:28.640995946 +0000 UTC m=+2937.311121856" watchObservedRunningTime="2026-02-28 05:22:28.651036498 +0000 UTC m=+2937.321162408" Feb 28 05:22:34 crc kubenswrapper[5014]: I0228 05:22:34.523236 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xf8tb" Feb 28 05:22:34 crc kubenswrapper[5014]: I0228 05:22:34.524748 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xf8tb" Feb 28 05:22:34 crc kubenswrapper[5014]: I0228 05:22:34.579974 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xf8tb" Feb 28 05:22:34 crc kubenswrapper[5014]: I0228 05:22:34.727511 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xf8tb" Feb 28 05:22:34 crc kubenswrapper[5014]: I0228 05:22:34.818545 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-xf8tb"] Feb 28 05:22:36 crc kubenswrapper[5014]: I0228 05:22:36.698130 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xf8tb" podUID="a5e1adae-05bc-4292-bdee-d18bfe732b32" containerName="registry-server" containerID="cri-o://90ada7c5236d998e3f2e25b3ebd36a30d0660a9b9037061765dd0f287662f2c0" gracePeriod=2 Feb 28 05:22:37 crc kubenswrapper[5014]: I0228 05:22:37.211376 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xf8tb" Feb 28 05:22:37 crc kubenswrapper[5014]: I0228 05:22:37.240474 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5e1adae-05bc-4292-bdee-d18bfe732b32-utilities" (OuterVolumeSpecName: "utilities") pod "a5e1adae-05bc-4292-bdee-d18bfe732b32" (UID: "a5e1adae-05bc-4292-bdee-d18bfe732b32"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:22:37 crc kubenswrapper[5014]: I0228 05:22:37.240945 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5e1adae-05bc-4292-bdee-d18bfe732b32-utilities\") pod \"a5e1adae-05bc-4292-bdee-d18bfe732b32\" (UID: \"a5e1adae-05bc-4292-bdee-d18bfe732b32\") " Feb 28 05:22:37 crc kubenswrapper[5014]: I0228 05:22:37.241055 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zk2k\" (UniqueName: \"kubernetes.io/projected/a5e1adae-05bc-4292-bdee-d18bfe732b32-kube-api-access-2zk2k\") pod \"a5e1adae-05bc-4292-bdee-d18bfe732b32\" (UID: \"a5e1adae-05bc-4292-bdee-d18bfe732b32\") " Feb 28 05:22:37 crc kubenswrapper[5014]: I0228 05:22:37.241113 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a5e1adae-05bc-4292-bdee-d18bfe732b32-catalog-content\") pod \"a5e1adae-05bc-4292-bdee-d18bfe732b32\" (UID: \"a5e1adae-05bc-4292-bdee-d18bfe732b32\") " Feb 28 05:22:37 crc kubenswrapper[5014]: I0228 05:22:37.252199 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5e1adae-05bc-4292-bdee-d18bfe732b32-kube-api-access-2zk2k" (OuterVolumeSpecName: "kube-api-access-2zk2k") pod "a5e1adae-05bc-4292-bdee-d18bfe732b32" (UID: "a5e1adae-05bc-4292-bdee-d18bfe732b32"). InnerVolumeSpecName "kube-api-access-2zk2k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:22:37 crc kubenswrapper[5014]: I0228 05:22:37.314945 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5e1adae-05bc-4292-bdee-d18bfe732b32-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a5e1adae-05bc-4292-bdee-d18bfe732b32" (UID: "a5e1adae-05bc-4292-bdee-d18bfe732b32"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:22:37 crc kubenswrapper[5014]: I0228 05:22:37.343553 5014 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a5e1adae-05bc-4292-bdee-d18bfe732b32-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 05:22:37 crc kubenswrapper[5014]: I0228 05:22:37.343601 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2zk2k\" (UniqueName: \"kubernetes.io/projected/a5e1adae-05bc-4292-bdee-d18bfe732b32-kube-api-access-2zk2k\") on node \"crc\" DevicePath \"\"" Feb 28 05:22:37 crc kubenswrapper[5014]: I0228 05:22:37.343619 5014 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a5e1adae-05bc-4292-bdee-d18bfe732b32-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 05:22:37 crc kubenswrapper[5014]: I0228 05:22:37.712018 5014 generic.go:334] "Generic (PLEG): container finished" podID="a5e1adae-05bc-4292-bdee-d18bfe732b32" containerID="90ada7c5236d998e3f2e25b3ebd36a30d0660a9b9037061765dd0f287662f2c0" exitCode=0 Feb 28 05:22:37 crc kubenswrapper[5014]: I0228 05:22:37.712092 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xf8tb" Feb 28 05:22:37 crc kubenswrapper[5014]: I0228 05:22:37.712132 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xf8tb" event={"ID":"a5e1adae-05bc-4292-bdee-d18bfe732b32","Type":"ContainerDied","Data":"90ada7c5236d998e3f2e25b3ebd36a30d0660a9b9037061765dd0f287662f2c0"} Feb 28 05:22:37 crc kubenswrapper[5014]: I0228 05:22:37.712536 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xf8tb" event={"ID":"a5e1adae-05bc-4292-bdee-d18bfe732b32","Type":"ContainerDied","Data":"86e5def1067f3a074192f476d830ccd70169fc6f6ac6b2f8b394cf8232db6dc8"} Feb 28 05:22:37 crc kubenswrapper[5014]: I0228 05:22:37.712581 5014 scope.go:117] "RemoveContainer" containerID="90ada7c5236d998e3f2e25b3ebd36a30d0660a9b9037061765dd0f287662f2c0" Feb 28 05:22:37 crc kubenswrapper[5014]: I0228 05:22:37.742254 5014 scope.go:117] "RemoveContainer" containerID="70d5806709c5742ddd7f9bea9ac0fcf3a0d3e394ca429d6c43ae60ae93d27d95" Feb 28 05:22:37 crc kubenswrapper[5014]: I0228 05:22:37.779109 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xf8tb"] Feb 28 05:22:37 crc kubenswrapper[5014]: I0228 05:22:37.787186 5014 scope.go:117] "RemoveContainer" containerID="fab1039a20ecb1e5eacab4f3d7679ca1387db6148ca1e2d47e8bd58a2efe19ee" Feb 28 05:22:37 crc kubenswrapper[5014]: I0228 05:22:37.787733 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xf8tb"] Feb 28 05:22:37 crc kubenswrapper[5014]: I0228 05:22:37.844104 5014 scope.go:117] "RemoveContainer" containerID="90ada7c5236d998e3f2e25b3ebd36a30d0660a9b9037061765dd0f287662f2c0" Feb 28 05:22:37 crc kubenswrapper[5014]: E0228 05:22:37.844589 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"90ada7c5236d998e3f2e25b3ebd36a30d0660a9b9037061765dd0f287662f2c0\": container with ID starting with 90ada7c5236d998e3f2e25b3ebd36a30d0660a9b9037061765dd0f287662f2c0 not found: ID does not exist" containerID="90ada7c5236d998e3f2e25b3ebd36a30d0660a9b9037061765dd0f287662f2c0" Feb 28 05:22:37 crc kubenswrapper[5014]: I0228 05:22:37.844631 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90ada7c5236d998e3f2e25b3ebd36a30d0660a9b9037061765dd0f287662f2c0"} err="failed to get container status \"90ada7c5236d998e3f2e25b3ebd36a30d0660a9b9037061765dd0f287662f2c0\": rpc error: code = NotFound desc = could not find container \"90ada7c5236d998e3f2e25b3ebd36a30d0660a9b9037061765dd0f287662f2c0\": container with ID starting with 90ada7c5236d998e3f2e25b3ebd36a30d0660a9b9037061765dd0f287662f2c0 not found: ID does not exist" Feb 28 05:22:37 crc kubenswrapper[5014]: I0228 05:22:37.844658 5014 scope.go:117] "RemoveContainer" containerID="70d5806709c5742ddd7f9bea9ac0fcf3a0d3e394ca429d6c43ae60ae93d27d95" Feb 28 05:22:37 crc kubenswrapper[5014]: E0228 05:22:37.845068 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70d5806709c5742ddd7f9bea9ac0fcf3a0d3e394ca429d6c43ae60ae93d27d95\": container with ID starting with 70d5806709c5742ddd7f9bea9ac0fcf3a0d3e394ca429d6c43ae60ae93d27d95 not found: ID does not exist" containerID="70d5806709c5742ddd7f9bea9ac0fcf3a0d3e394ca429d6c43ae60ae93d27d95" Feb 28 05:22:37 crc kubenswrapper[5014]: I0228 05:22:37.845185 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70d5806709c5742ddd7f9bea9ac0fcf3a0d3e394ca429d6c43ae60ae93d27d95"} err="failed to get container status \"70d5806709c5742ddd7f9bea9ac0fcf3a0d3e394ca429d6c43ae60ae93d27d95\": rpc error: code = NotFound desc = could not find container \"70d5806709c5742ddd7f9bea9ac0fcf3a0d3e394ca429d6c43ae60ae93d27d95\": container with ID 
starting with 70d5806709c5742ddd7f9bea9ac0fcf3a0d3e394ca429d6c43ae60ae93d27d95 not found: ID does not exist" Feb 28 05:22:37 crc kubenswrapper[5014]: I0228 05:22:37.845230 5014 scope.go:117] "RemoveContainer" containerID="fab1039a20ecb1e5eacab4f3d7679ca1387db6148ca1e2d47e8bd58a2efe19ee" Feb 28 05:22:37 crc kubenswrapper[5014]: E0228 05:22:37.845689 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fab1039a20ecb1e5eacab4f3d7679ca1387db6148ca1e2d47e8bd58a2efe19ee\": container with ID starting with fab1039a20ecb1e5eacab4f3d7679ca1387db6148ca1e2d47e8bd58a2efe19ee not found: ID does not exist" containerID="fab1039a20ecb1e5eacab4f3d7679ca1387db6148ca1e2d47e8bd58a2efe19ee" Feb 28 05:22:37 crc kubenswrapper[5014]: I0228 05:22:37.845763 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fab1039a20ecb1e5eacab4f3d7679ca1387db6148ca1e2d47e8bd58a2efe19ee"} err="failed to get container status \"fab1039a20ecb1e5eacab4f3d7679ca1387db6148ca1e2d47e8bd58a2efe19ee\": rpc error: code = NotFound desc = could not find container \"fab1039a20ecb1e5eacab4f3d7679ca1387db6148ca1e2d47e8bd58a2efe19ee\": container with ID starting with fab1039a20ecb1e5eacab4f3d7679ca1387db6148ca1e2d47e8bd58a2efe19ee not found: ID does not exist" Feb 28 05:22:38 crc kubenswrapper[5014]: I0228 05:22:38.183286 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5e1adae-05bc-4292-bdee-d18bfe732b32" path="/var/lib/kubelet/pods/a5e1adae-05bc-4292-bdee-d18bfe732b32/volumes" Feb 28 05:22:58 crc kubenswrapper[5014]: I0228 05:22:58.356985 5014 scope.go:117] "RemoveContainer" containerID="7979b5a6bb8f92deada7f44e2da6a74c73d130949bb5486c480d7a716fa17d82" Feb 28 05:23:33 crc kubenswrapper[5014]: I0228 05:23:33.530709 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-699th"] Feb 28 05:23:33 crc kubenswrapper[5014]: E0228 
05:23:33.531993 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5e1adae-05bc-4292-bdee-d18bfe732b32" containerName="registry-server" Feb 28 05:23:33 crc kubenswrapper[5014]: I0228 05:23:33.532015 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5e1adae-05bc-4292-bdee-d18bfe732b32" containerName="registry-server" Feb 28 05:23:33 crc kubenswrapper[5014]: E0228 05:23:33.532047 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5e1adae-05bc-4292-bdee-d18bfe732b32" containerName="extract-content" Feb 28 05:23:33 crc kubenswrapper[5014]: I0228 05:23:33.532058 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5e1adae-05bc-4292-bdee-d18bfe732b32" containerName="extract-content" Feb 28 05:23:33 crc kubenswrapper[5014]: E0228 05:23:33.532080 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5e1adae-05bc-4292-bdee-d18bfe732b32" containerName="extract-utilities" Feb 28 05:23:33 crc kubenswrapper[5014]: I0228 05:23:33.532091 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5e1adae-05bc-4292-bdee-d18bfe732b32" containerName="extract-utilities" Feb 28 05:23:33 crc kubenswrapper[5014]: I0228 05:23:33.532418 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5e1adae-05bc-4292-bdee-d18bfe732b32" containerName="registry-server" Feb 28 05:23:33 crc kubenswrapper[5014]: I0228 05:23:33.534582 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-699th" Feb 28 05:23:33 crc kubenswrapper[5014]: I0228 05:23:33.557081 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-699th"] Feb 28 05:23:33 crc kubenswrapper[5014]: I0228 05:23:33.658676 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42aed2c8-1035-42b1-92ed-626aeb29af57-utilities\") pod \"redhat-marketplace-699th\" (UID: \"42aed2c8-1035-42b1-92ed-626aeb29af57\") " pod="openshift-marketplace/redhat-marketplace-699th" Feb 28 05:23:33 crc kubenswrapper[5014]: I0228 05:23:33.658763 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42aed2c8-1035-42b1-92ed-626aeb29af57-catalog-content\") pod \"redhat-marketplace-699th\" (UID: \"42aed2c8-1035-42b1-92ed-626aeb29af57\") " pod="openshift-marketplace/redhat-marketplace-699th" Feb 28 05:23:33 crc kubenswrapper[5014]: I0228 05:23:33.658785 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mljtl\" (UniqueName: \"kubernetes.io/projected/42aed2c8-1035-42b1-92ed-626aeb29af57-kube-api-access-mljtl\") pod \"redhat-marketplace-699th\" (UID: \"42aed2c8-1035-42b1-92ed-626aeb29af57\") " pod="openshift-marketplace/redhat-marketplace-699th" Feb 28 05:23:33 crc kubenswrapper[5014]: I0228 05:23:33.760449 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42aed2c8-1035-42b1-92ed-626aeb29af57-utilities\") pod \"redhat-marketplace-699th\" (UID: \"42aed2c8-1035-42b1-92ed-626aeb29af57\") " pod="openshift-marketplace/redhat-marketplace-699th" Feb 28 05:23:33 crc kubenswrapper[5014]: I0228 05:23:33.760994 5014 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42aed2c8-1035-42b1-92ed-626aeb29af57-utilities\") pod \"redhat-marketplace-699th\" (UID: \"42aed2c8-1035-42b1-92ed-626aeb29af57\") " pod="openshift-marketplace/redhat-marketplace-699th" Feb 28 05:23:33 crc kubenswrapper[5014]: I0228 05:23:33.762525 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42aed2c8-1035-42b1-92ed-626aeb29af57-catalog-content\") pod \"redhat-marketplace-699th\" (UID: \"42aed2c8-1035-42b1-92ed-626aeb29af57\") " pod="openshift-marketplace/redhat-marketplace-699th" Feb 28 05:23:33 crc kubenswrapper[5014]: I0228 05:23:33.762631 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mljtl\" (UniqueName: \"kubernetes.io/projected/42aed2c8-1035-42b1-92ed-626aeb29af57-kube-api-access-mljtl\") pod \"redhat-marketplace-699th\" (UID: \"42aed2c8-1035-42b1-92ed-626aeb29af57\") " pod="openshift-marketplace/redhat-marketplace-699th" Feb 28 05:23:33 crc kubenswrapper[5014]: I0228 05:23:33.763083 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42aed2c8-1035-42b1-92ed-626aeb29af57-catalog-content\") pod \"redhat-marketplace-699th\" (UID: \"42aed2c8-1035-42b1-92ed-626aeb29af57\") " pod="openshift-marketplace/redhat-marketplace-699th" Feb 28 05:23:33 crc kubenswrapper[5014]: I0228 05:23:33.797353 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mljtl\" (UniqueName: \"kubernetes.io/projected/42aed2c8-1035-42b1-92ed-626aeb29af57-kube-api-access-mljtl\") pod \"redhat-marketplace-699th\" (UID: \"42aed2c8-1035-42b1-92ed-626aeb29af57\") " pod="openshift-marketplace/redhat-marketplace-699th" Feb 28 05:23:33 crc kubenswrapper[5014]: I0228 05:23:33.885197 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-699th" Feb 28 05:23:34 crc kubenswrapper[5014]: I0228 05:23:34.530913 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-699th"] Feb 28 05:23:35 crc kubenswrapper[5014]: I0228 05:23:35.289071 5014 generic.go:334] "Generic (PLEG): container finished" podID="42aed2c8-1035-42b1-92ed-626aeb29af57" containerID="902d34c222f9ee0db9a376ad00081411be5ff5b96eaa2d684e10b313eb9d64e6" exitCode=0 Feb 28 05:23:35 crc kubenswrapper[5014]: I0228 05:23:35.289184 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-699th" event={"ID":"42aed2c8-1035-42b1-92ed-626aeb29af57","Type":"ContainerDied","Data":"902d34c222f9ee0db9a376ad00081411be5ff5b96eaa2d684e10b313eb9d64e6"} Feb 28 05:23:35 crc kubenswrapper[5014]: I0228 05:23:35.289604 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-699th" event={"ID":"42aed2c8-1035-42b1-92ed-626aeb29af57","Type":"ContainerStarted","Data":"06f95d071b986123783421ffeea67214e38ee7cc2d1e6a3e007a8a31325c5d8e"} Feb 28 05:23:35 crc kubenswrapper[5014]: I0228 05:23:35.291774 5014 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 05:23:36 crc kubenswrapper[5014]: I0228 05:23:36.303625 5014 generic.go:334] "Generic (PLEG): container finished" podID="42aed2c8-1035-42b1-92ed-626aeb29af57" containerID="3401baba33aa1b9e6dba162f7c1e1e0d215193e97a08813fde4d2ffcebe9db6a" exitCode=0 Feb 28 05:23:36 crc kubenswrapper[5014]: I0228 05:23:36.303769 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-699th" event={"ID":"42aed2c8-1035-42b1-92ed-626aeb29af57","Type":"ContainerDied","Data":"3401baba33aa1b9e6dba162f7c1e1e0d215193e97a08813fde4d2ffcebe9db6a"} Feb 28 05:23:37 crc kubenswrapper[5014]: I0228 05:23:37.313551 5014 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-699th" event={"ID":"42aed2c8-1035-42b1-92ed-626aeb29af57","Type":"ContainerStarted","Data":"e027aeb3fdd8a17367ad430000d279bda70ad9646cae87ba2e9328c4f7e22180"} Feb 28 05:23:37 crc kubenswrapper[5014]: I0228 05:23:37.337594 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-699th" podStartSLOduration=2.724477012 podStartE2EDuration="4.337571261s" podCreationTimestamp="2026-02-28 05:23:33 +0000 UTC" firstStartedPulling="2026-02-28 05:23:35.291555356 +0000 UTC m=+3003.961681256" lastFinishedPulling="2026-02-28 05:23:36.904649595 +0000 UTC m=+3005.574775505" observedRunningTime="2026-02-28 05:23:37.329739678 +0000 UTC m=+3005.999865588" watchObservedRunningTime="2026-02-28 05:23:37.337571261 +0000 UTC m=+3006.007697171" Feb 28 05:23:43 crc kubenswrapper[5014]: I0228 05:23:43.891068 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-699th" Feb 28 05:23:43 crc kubenswrapper[5014]: I0228 05:23:43.891565 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-699th" Feb 28 05:23:43 crc kubenswrapper[5014]: I0228 05:23:43.940928 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-699th" Feb 28 05:23:44 crc kubenswrapper[5014]: I0228 05:23:44.438417 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-699th" Feb 28 05:23:44 crc kubenswrapper[5014]: I0228 05:23:44.490188 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-699th"] Feb 28 05:23:46 crc kubenswrapper[5014]: I0228 05:23:46.392483 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-699th" 
podUID="42aed2c8-1035-42b1-92ed-626aeb29af57" containerName="registry-server" containerID="cri-o://e027aeb3fdd8a17367ad430000d279bda70ad9646cae87ba2e9328c4f7e22180" gracePeriod=2 Feb 28 05:23:47 crc kubenswrapper[5014]: I0228 05:23:47.387190 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-699th" Feb 28 05:23:47 crc kubenswrapper[5014]: I0228 05:23:47.405291 5014 generic.go:334] "Generic (PLEG): container finished" podID="42aed2c8-1035-42b1-92ed-626aeb29af57" containerID="e027aeb3fdd8a17367ad430000d279bda70ad9646cae87ba2e9328c4f7e22180" exitCode=0 Feb 28 05:23:47 crc kubenswrapper[5014]: I0228 05:23:47.405325 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-699th" event={"ID":"42aed2c8-1035-42b1-92ed-626aeb29af57","Type":"ContainerDied","Data":"e027aeb3fdd8a17367ad430000d279bda70ad9646cae87ba2e9328c4f7e22180"} Feb 28 05:23:47 crc kubenswrapper[5014]: I0228 05:23:47.405349 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-699th" event={"ID":"42aed2c8-1035-42b1-92ed-626aeb29af57","Type":"ContainerDied","Data":"06f95d071b986123783421ffeea67214e38ee7cc2d1e6a3e007a8a31325c5d8e"} Feb 28 05:23:47 crc kubenswrapper[5014]: I0228 05:23:47.405364 5014 scope.go:117] "RemoveContainer" containerID="e027aeb3fdd8a17367ad430000d279bda70ad9646cae87ba2e9328c4f7e22180" Feb 28 05:23:47 crc kubenswrapper[5014]: I0228 05:23:47.405479 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-699th" Feb 28 05:23:47 crc kubenswrapper[5014]: I0228 05:23:47.432343 5014 scope.go:117] "RemoveContainer" containerID="3401baba33aa1b9e6dba162f7c1e1e0d215193e97a08813fde4d2ffcebe9db6a" Feb 28 05:23:47 crc kubenswrapper[5014]: I0228 05:23:47.455078 5014 scope.go:117] "RemoveContainer" containerID="902d34c222f9ee0db9a376ad00081411be5ff5b96eaa2d684e10b313eb9d64e6" Feb 28 05:23:47 crc kubenswrapper[5014]: I0228 05:23:47.497254 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42aed2c8-1035-42b1-92ed-626aeb29af57-utilities\") pod \"42aed2c8-1035-42b1-92ed-626aeb29af57\" (UID: \"42aed2c8-1035-42b1-92ed-626aeb29af57\") " Feb 28 05:23:47 crc kubenswrapper[5014]: I0228 05:23:47.497415 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mljtl\" (UniqueName: \"kubernetes.io/projected/42aed2c8-1035-42b1-92ed-626aeb29af57-kube-api-access-mljtl\") pod \"42aed2c8-1035-42b1-92ed-626aeb29af57\" (UID: \"42aed2c8-1035-42b1-92ed-626aeb29af57\") " Feb 28 05:23:47 crc kubenswrapper[5014]: I0228 05:23:47.497470 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42aed2c8-1035-42b1-92ed-626aeb29af57-catalog-content\") pod \"42aed2c8-1035-42b1-92ed-626aeb29af57\" (UID: \"42aed2c8-1035-42b1-92ed-626aeb29af57\") " Feb 28 05:23:47 crc kubenswrapper[5014]: I0228 05:23:47.498610 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42aed2c8-1035-42b1-92ed-626aeb29af57-utilities" (OuterVolumeSpecName: "utilities") pod "42aed2c8-1035-42b1-92ed-626aeb29af57" (UID: "42aed2c8-1035-42b1-92ed-626aeb29af57"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:23:47 crc kubenswrapper[5014]: I0228 05:23:47.502149 5014 scope.go:117] "RemoveContainer" containerID="e027aeb3fdd8a17367ad430000d279bda70ad9646cae87ba2e9328c4f7e22180" Feb 28 05:23:47 crc kubenswrapper[5014]: E0228 05:23:47.502578 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e027aeb3fdd8a17367ad430000d279bda70ad9646cae87ba2e9328c4f7e22180\": container with ID starting with e027aeb3fdd8a17367ad430000d279bda70ad9646cae87ba2e9328c4f7e22180 not found: ID does not exist" containerID="e027aeb3fdd8a17367ad430000d279bda70ad9646cae87ba2e9328c4f7e22180" Feb 28 05:23:47 crc kubenswrapper[5014]: I0228 05:23:47.502650 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e027aeb3fdd8a17367ad430000d279bda70ad9646cae87ba2e9328c4f7e22180"} err="failed to get container status \"e027aeb3fdd8a17367ad430000d279bda70ad9646cae87ba2e9328c4f7e22180\": rpc error: code = NotFound desc = could not find container \"e027aeb3fdd8a17367ad430000d279bda70ad9646cae87ba2e9328c4f7e22180\": container with ID starting with e027aeb3fdd8a17367ad430000d279bda70ad9646cae87ba2e9328c4f7e22180 not found: ID does not exist" Feb 28 05:23:47 crc kubenswrapper[5014]: I0228 05:23:47.502676 5014 scope.go:117] "RemoveContainer" containerID="3401baba33aa1b9e6dba162f7c1e1e0d215193e97a08813fde4d2ffcebe9db6a" Feb 28 05:23:47 crc kubenswrapper[5014]: E0228 05:23:47.506958 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3401baba33aa1b9e6dba162f7c1e1e0d215193e97a08813fde4d2ffcebe9db6a\": container with ID starting with 3401baba33aa1b9e6dba162f7c1e1e0d215193e97a08813fde4d2ffcebe9db6a not found: ID does not exist" containerID="3401baba33aa1b9e6dba162f7c1e1e0d215193e97a08813fde4d2ffcebe9db6a" Feb 28 05:23:47 crc kubenswrapper[5014]: I0228 05:23:47.506998 
5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42aed2c8-1035-42b1-92ed-626aeb29af57-kube-api-access-mljtl" (OuterVolumeSpecName: "kube-api-access-mljtl") pod "42aed2c8-1035-42b1-92ed-626aeb29af57" (UID: "42aed2c8-1035-42b1-92ed-626aeb29af57"). InnerVolumeSpecName "kube-api-access-mljtl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:23:47 crc kubenswrapper[5014]: I0228 05:23:47.507002 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3401baba33aa1b9e6dba162f7c1e1e0d215193e97a08813fde4d2ffcebe9db6a"} err="failed to get container status \"3401baba33aa1b9e6dba162f7c1e1e0d215193e97a08813fde4d2ffcebe9db6a\": rpc error: code = NotFound desc = could not find container \"3401baba33aa1b9e6dba162f7c1e1e0d215193e97a08813fde4d2ffcebe9db6a\": container with ID starting with 3401baba33aa1b9e6dba162f7c1e1e0d215193e97a08813fde4d2ffcebe9db6a not found: ID does not exist" Feb 28 05:23:47 crc kubenswrapper[5014]: I0228 05:23:47.507033 5014 scope.go:117] "RemoveContainer" containerID="902d34c222f9ee0db9a376ad00081411be5ff5b96eaa2d684e10b313eb9d64e6" Feb 28 05:23:47 crc kubenswrapper[5014]: E0228 05:23:47.507589 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"902d34c222f9ee0db9a376ad00081411be5ff5b96eaa2d684e10b313eb9d64e6\": container with ID starting with 902d34c222f9ee0db9a376ad00081411be5ff5b96eaa2d684e10b313eb9d64e6 not found: ID does not exist" containerID="902d34c222f9ee0db9a376ad00081411be5ff5b96eaa2d684e10b313eb9d64e6" Feb 28 05:23:47 crc kubenswrapper[5014]: I0228 05:23:47.507615 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"902d34c222f9ee0db9a376ad00081411be5ff5b96eaa2d684e10b313eb9d64e6"} err="failed to get container status \"902d34c222f9ee0db9a376ad00081411be5ff5b96eaa2d684e10b313eb9d64e6\": rpc error: code = 
NotFound desc = could not find container \"902d34c222f9ee0db9a376ad00081411be5ff5b96eaa2d684e10b313eb9d64e6\": container with ID starting with 902d34c222f9ee0db9a376ad00081411be5ff5b96eaa2d684e10b313eb9d64e6 not found: ID does not exist" Feb 28 05:23:47 crc kubenswrapper[5014]: I0228 05:23:47.528372 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42aed2c8-1035-42b1-92ed-626aeb29af57-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42aed2c8-1035-42b1-92ed-626aeb29af57" (UID: "42aed2c8-1035-42b1-92ed-626aeb29af57"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:23:47 crc kubenswrapper[5014]: I0228 05:23:47.599607 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mljtl\" (UniqueName: \"kubernetes.io/projected/42aed2c8-1035-42b1-92ed-626aeb29af57-kube-api-access-mljtl\") on node \"crc\" DevicePath \"\"" Feb 28 05:23:47 crc kubenswrapper[5014]: I0228 05:23:47.600089 5014 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42aed2c8-1035-42b1-92ed-626aeb29af57-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 05:23:47 crc kubenswrapper[5014]: I0228 05:23:47.600107 5014 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42aed2c8-1035-42b1-92ed-626aeb29af57-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 05:23:47 crc kubenswrapper[5014]: I0228 05:23:47.749422 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-699th"] Feb 28 05:23:47 crc kubenswrapper[5014]: I0228 05:23:47.760984 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-699th"] Feb 28 05:23:48 crc kubenswrapper[5014]: I0228 05:23:48.187086 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="42aed2c8-1035-42b1-92ed-626aeb29af57" path="/var/lib/kubelet/pods/42aed2c8-1035-42b1-92ed-626aeb29af57/volumes" Feb 28 05:24:00 crc kubenswrapper[5014]: I0228 05:24:00.156996 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537604-fw4zg"] Feb 28 05:24:00 crc kubenswrapper[5014]: E0228 05:24:00.158381 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42aed2c8-1035-42b1-92ed-626aeb29af57" containerName="registry-server" Feb 28 05:24:00 crc kubenswrapper[5014]: I0228 05:24:00.158412 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="42aed2c8-1035-42b1-92ed-626aeb29af57" containerName="registry-server" Feb 28 05:24:00 crc kubenswrapper[5014]: E0228 05:24:00.158448 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42aed2c8-1035-42b1-92ed-626aeb29af57" containerName="extract-content" Feb 28 05:24:00 crc kubenswrapper[5014]: I0228 05:24:00.158464 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="42aed2c8-1035-42b1-92ed-626aeb29af57" containerName="extract-content" Feb 28 05:24:00 crc kubenswrapper[5014]: E0228 05:24:00.158503 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42aed2c8-1035-42b1-92ed-626aeb29af57" containerName="extract-utilities" Feb 28 05:24:00 crc kubenswrapper[5014]: I0228 05:24:00.158520 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="42aed2c8-1035-42b1-92ed-626aeb29af57" containerName="extract-utilities" Feb 28 05:24:00 crc kubenswrapper[5014]: I0228 05:24:00.158976 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="42aed2c8-1035-42b1-92ed-626aeb29af57" containerName="registry-server" Feb 28 05:24:00 crc kubenswrapper[5014]: I0228 05:24:00.160294 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537604-fw4zg" Feb 28 05:24:00 crc kubenswrapper[5014]: I0228 05:24:00.163643 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 05:24:00 crc kubenswrapper[5014]: I0228 05:24:00.163898 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 05:24:00 crc kubenswrapper[5014]: I0228 05:24:00.164707 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-r6pdk" Feb 28 05:24:00 crc kubenswrapper[5014]: I0228 05:24:00.167070 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537604-fw4zg"] Feb 28 05:24:00 crc kubenswrapper[5014]: I0228 05:24:00.263903 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz4bt\" (UniqueName: \"kubernetes.io/projected/54ef16f6-0d33-49bd-ac7d-b1c484fcf531-kube-api-access-zz4bt\") pod \"auto-csr-approver-29537604-fw4zg\" (UID: \"54ef16f6-0d33-49bd-ac7d-b1c484fcf531\") " pod="openshift-infra/auto-csr-approver-29537604-fw4zg" Feb 28 05:24:00 crc kubenswrapper[5014]: I0228 05:24:00.366493 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz4bt\" (UniqueName: \"kubernetes.io/projected/54ef16f6-0d33-49bd-ac7d-b1c484fcf531-kube-api-access-zz4bt\") pod \"auto-csr-approver-29537604-fw4zg\" (UID: \"54ef16f6-0d33-49bd-ac7d-b1c484fcf531\") " pod="openshift-infra/auto-csr-approver-29537604-fw4zg" Feb 28 05:24:00 crc kubenswrapper[5014]: I0228 05:24:00.400933 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz4bt\" (UniqueName: \"kubernetes.io/projected/54ef16f6-0d33-49bd-ac7d-b1c484fcf531-kube-api-access-zz4bt\") pod \"auto-csr-approver-29537604-fw4zg\" (UID: \"54ef16f6-0d33-49bd-ac7d-b1c484fcf531\") " 
pod="openshift-infra/auto-csr-approver-29537604-fw4zg" Feb 28 05:24:00 crc kubenswrapper[5014]: I0228 05:24:00.486149 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537604-fw4zg" Feb 28 05:24:00 crc kubenswrapper[5014]: W0228 05:24:00.954419 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54ef16f6_0d33_49bd_ac7d_b1c484fcf531.slice/crio-56dde88b703c5aab7616ce4778789d7ca249341c54a83e927f24381fc8ce0cca WatchSource:0}: Error finding container 56dde88b703c5aab7616ce4778789d7ca249341c54a83e927f24381fc8ce0cca: Status 404 returned error can't find the container with id 56dde88b703c5aab7616ce4778789d7ca249341c54a83e927f24381fc8ce0cca Feb 28 05:24:00 crc kubenswrapper[5014]: I0228 05:24:00.959596 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537604-fw4zg"] Feb 28 05:24:01 crc kubenswrapper[5014]: I0228 05:24:01.550977 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537604-fw4zg" event={"ID":"54ef16f6-0d33-49bd-ac7d-b1c484fcf531","Type":"ContainerStarted","Data":"56dde88b703c5aab7616ce4778789d7ca249341c54a83e927f24381fc8ce0cca"} Feb 28 05:24:02 crc kubenswrapper[5014]: I0228 05:24:02.562995 5014 generic.go:334] "Generic (PLEG): container finished" podID="54ef16f6-0d33-49bd-ac7d-b1c484fcf531" containerID="c2e1b73fc9b75769ddaee936d2429553e86351d655cea60bd40cf131b88e14f6" exitCode=0 Feb 28 05:24:02 crc kubenswrapper[5014]: I0228 05:24:02.563069 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537604-fw4zg" event={"ID":"54ef16f6-0d33-49bd-ac7d-b1c484fcf531","Type":"ContainerDied","Data":"c2e1b73fc9b75769ddaee936d2429553e86351d655cea60bd40cf131b88e14f6"} Feb 28 05:24:04 crc kubenswrapper[5014]: I0228 05:24:04.942928 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537604-fw4zg" Feb 28 05:24:04 crc kubenswrapper[5014]: I0228 05:24:04.970100 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537604-fw4zg" Feb 28 05:24:05 crc kubenswrapper[5014]: I0228 05:24:05.014459 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537604-fw4zg" event={"ID":"54ef16f6-0d33-49bd-ac7d-b1c484fcf531","Type":"ContainerDied","Data":"56dde88b703c5aab7616ce4778789d7ca249341c54a83e927f24381fc8ce0cca"} Feb 28 05:24:05 crc kubenswrapper[5014]: I0228 05:24:05.014500 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56dde88b703c5aab7616ce4778789d7ca249341c54a83e927f24381fc8ce0cca" Feb 28 05:24:05 crc kubenswrapper[5014]: I0228 05:24:05.058218 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zz4bt\" (UniqueName: \"kubernetes.io/projected/54ef16f6-0d33-49bd-ac7d-b1c484fcf531-kube-api-access-zz4bt\") pod \"54ef16f6-0d33-49bd-ac7d-b1c484fcf531\" (UID: \"54ef16f6-0d33-49bd-ac7d-b1c484fcf531\") " Feb 28 05:24:05 crc kubenswrapper[5014]: I0228 05:24:05.068692 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54ef16f6-0d33-49bd-ac7d-b1c484fcf531-kube-api-access-zz4bt" (OuterVolumeSpecName: "kube-api-access-zz4bt") pod "54ef16f6-0d33-49bd-ac7d-b1c484fcf531" (UID: "54ef16f6-0d33-49bd-ac7d-b1c484fcf531"). InnerVolumeSpecName "kube-api-access-zz4bt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:24:05 crc kubenswrapper[5014]: I0228 05:24:05.159959 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zz4bt\" (UniqueName: \"kubernetes.io/projected/54ef16f6-0d33-49bd-ac7d-b1c484fcf531-kube-api-access-zz4bt\") on node \"crc\" DevicePath \"\"" Feb 28 05:24:06 crc kubenswrapper[5014]: I0228 05:24:06.034286 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537598-l8ms6"] Feb 28 05:24:06 crc kubenswrapper[5014]: I0228 05:24:06.041257 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537598-l8ms6"] Feb 28 05:24:06 crc kubenswrapper[5014]: I0228 05:24:06.182119 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdbd7594-dc62-4b56-a5f4-dc5c9d92cdd2" path="/var/lib/kubelet/pods/cdbd7594-dc62-4b56-a5f4-dc5c9d92cdd2/volumes" Feb 28 05:24:15 crc kubenswrapper[5014]: I0228 05:24:15.706502 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 05:24:15 crc kubenswrapper[5014]: I0228 05:24:15.707013 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 05:24:45 crc kubenswrapper[5014]: I0228 05:24:45.706477 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Feb 28 05:24:45 crc kubenswrapper[5014]: I0228 05:24:45.707105 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 05:24:58 crc kubenswrapper[5014]: I0228 05:24:58.472605 5014 scope.go:117] "RemoveContainer" containerID="e67ba660f93190165df44c6a3ec26e545e5b7fef96190973e226469ac9db3a01" Feb 28 05:25:15 crc kubenswrapper[5014]: I0228 05:25:15.706704 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 05:25:15 crc kubenswrapper[5014]: I0228 05:25:15.707350 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 05:25:15 crc kubenswrapper[5014]: I0228 05:25:15.707401 5014 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cct62" Feb 28 05:25:15 crc kubenswrapper[5014]: I0228 05:25:15.708226 5014 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"192dfe00bc68c47111b7f3bc09d6d50a5ae0c8daea27c02e7d24d0f8e808dd55"} pod="openshift-machine-config-operator/machine-config-daemon-cct62" containerMessage="Container machine-config-daemon failed liveness probe, will be 
restarted" Feb 28 05:25:15 crc kubenswrapper[5014]: I0228 05:25:15.708277 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" containerID="cri-o://192dfe00bc68c47111b7f3bc09d6d50a5ae0c8daea27c02e7d24d0f8e808dd55" gracePeriod=600 Feb 28 05:25:15 crc kubenswrapper[5014]: E0228 05:25:15.858191 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:25:16 crc kubenswrapper[5014]: I0228 05:25:16.694794 5014 generic.go:334] "Generic (PLEG): container finished" podID="6aad0009-d904-48f8-8e30-82205907ece1" containerID="192dfe00bc68c47111b7f3bc09d6d50a5ae0c8daea27c02e7d24d0f8e808dd55" exitCode=0 Feb 28 05:25:16 crc kubenswrapper[5014]: I0228 05:25:16.694842 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" event={"ID":"6aad0009-d904-48f8-8e30-82205907ece1","Type":"ContainerDied","Data":"192dfe00bc68c47111b7f3bc09d6d50a5ae0c8daea27c02e7d24d0f8e808dd55"} Feb 28 05:25:16 crc kubenswrapper[5014]: I0228 05:25:16.695225 5014 scope.go:117] "RemoveContainer" containerID="c456e5f386c62d2fe58c1ccf175f9bdaa457a2719c956898c0819998d2ac4b45" Feb 28 05:25:16 crc kubenswrapper[5014]: I0228 05:25:16.696358 5014 scope.go:117] "RemoveContainer" containerID="192dfe00bc68c47111b7f3bc09d6d50a5ae0c8daea27c02e7d24d0f8e808dd55" Feb 28 05:25:16 crc kubenswrapper[5014]: E0228 05:25:16.696770 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:25:30 crc kubenswrapper[5014]: I0228 05:25:30.172625 5014 scope.go:117] "RemoveContainer" containerID="192dfe00bc68c47111b7f3bc09d6d50a5ae0c8daea27c02e7d24d0f8e808dd55" Feb 28 05:25:30 crc kubenswrapper[5014]: E0228 05:25:30.173540 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:25:43 crc kubenswrapper[5014]: I0228 05:25:43.172263 5014 scope.go:117] "RemoveContainer" containerID="192dfe00bc68c47111b7f3bc09d6d50a5ae0c8daea27c02e7d24d0f8e808dd55" Feb 28 05:25:43 crc kubenswrapper[5014]: E0228 05:25:43.173478 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:25:54 crc kubenswrapper[5014]: I0228 05:25:54.172859 5014 scope.go:117] "RemoveContainer" containerID="192dfe00bc68c47111b7f3bc09d6d50a5ae0c8daea27c02e7d24d0f8e808dd55" Feb 28 05:25:54 crc kubenswrapper[5014]: E0228 05:25:54.173661 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:26:00 crc kubenswrapper[5014]: I0228 05:26:00.198312 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537606-9stjj"] Feb 28 05:26:00 crc kubenswrapper[5014]: E0228 05:26:00.199681 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54ef16f6-0d33-49bd-ac7d-b1c484fcf531" containerName="oc" Feb 28 05:26:00 crc kubenswrapper[5014]: I0228 05:26:00.199699 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="54ef16f6-0d33-49bd-ac7d-b1c484fcf531" containerName="oc" Feb 28 05:26:00 crc kubenswrapper[5014]: I0228 05:26:00.200025 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="54ef16f6-0d33-49bd-ac7d-b1c484fcf531" containerName="oc" Feb 28 05:26:00 crc kubenswrapper[5014]: I0228 05:26:00.200940 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537606-9stjj"] Feb 28 05:26:00 crc kubenswrapper[5014]: I0228 05:26:00.201050 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537606-9stjj" Feb 28 05:26:00 crc kubenswrapper[5014]: I0228 05:26:00.203560 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-r6pdk" Feb 28 05:26:00 crc kubenswrapper[5014]: I0228 05:26:00.203975 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 05:26:00 crc kubenswrapper[5014]: I0228 05:26:00.203991 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 05:26:00 crc kubenswrapper[5014]: I0228 05:26:00.354871 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktgjt\" (UniqueName: \"kubernetes.io/projected/44d4ce55-e715-43aa-bc81-b8ae78482fb3-kube-api-access-ktgjt\") pod \"auto-csr-approver-29537606-9stjj\" (UID: \"44d4ce55-e715-43aa-bc81-b8ae78482fb3\") " pod="openshift-infra/auto-csr-approver-29537606-9stjj" Feb 28 05:26:00 crc kubenswrapper[5014]: I0228 05:26:00.456676 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktgjt\" (UniqueName: \"kubernetes.io/projected/44d4ce55-e715-43aa-bc81-b8ae78482fb3-kube-api-access-ktgjt\") pod \"auto-csr-approver-29537606-9stjj\" (UID: \"44d4ce55-e715-43aa-bc81-b8ae78482fb3\") " pod="openshift-infra/auto-csr-approver-29537606-9stjj" Feb 28 05:26:00 crc kubenswrapper[5014]: I0228 05:26:00.478229 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktgjt\" (UniqueName: \"kubernetes.io/projected/44d4ce55-e715-43aa-bc81-b8ae78482fb3-kube-api-access-ktgjt\") pod \"auto-csr-approver-29537606-9stjj\" (UID: \"44d4ce55-e715-43aa-bc81-b8ae78482fb3\") " pod="openshift-infra/auto-csr-approver-29537606-9stjj" Feb 28 05:26:00 crc kubenswrapper[5014]: I0228 05:26:00.530862 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537606-9stjj" Feb 28 05:26:01 crc kubenswrapper[5014]: I0228 05:26:01.012317 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537606-9stjj"] Feb 28 05:26:01 crc kubenswrapper[5014]: W0228 05:26:01.018276 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44d4ce55_e715_43aa_bc81_b8ae78482fb3.slice/crio-06646295107b1aaa6414f646c3c09a703465fb3b7d947fda350a725e617259ff WatchSource:0}: Error finding container 06646295107b1aaa6414f646c3c09a703465fb3b7d947fda350a725e617259ff: Status 404 returned error can't find the container with id 06646295107b1aaa6414f646c3c09a703465fb3b7d947fda350a725e617259ff Feb 28 05:26:01 crc kubenswrapper[5014]: I0228 05:26:01.136646 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537606-9stjj" event={"ID":"44d4ce55-e715-43aa-bc81-b8ae78482fb3","Type":"ContainerStarted","Data":"06646295107b1aaa6414f646c3c09a703465fb3b7d947fda350a725e617259ff"} Feb 28 05:26:03 crc kubenswrapper[5014]: I0228 05:26:03.166724 5014 generic.go:334] "Generic (PLEG): container finished" podID="44d4ce55-e715-43aa-bc81-b8ae78482fb3" containerID="ebf60cc84e8d3bf9a622f3a61a1c04ceb0413fe7c4cb0d63e48c36047eaaae35" exitCode=0 Feb 28 05:26:03 crc kubenswrapper[5014]: I0228 05:26:03.166829 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537606-9stjj" event={"ID":"44d4ce55-e715-43aa-bc81-b8ae78482fb3","Type":"ContainerDied","Data":"ebf60cc84e8d3bf9a622f3a61a1c04ceb0413fe7c4cb0d63e48c36047eaaae35"} Feb 28 05:26:04 crc kubenswrapper[5014]: I0228 05:26:04.569515 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537606-9stjj" Feb 28 05:26:04 crc kubenswrapper[5014]: I0228 05:26:04.735916 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktgjt\" (UniqueName: \"kubernetes.io/projected/44d4ce55-e715-43aa-bc81-b8ae78482fb3-kube-api-access-ktgjt\") pod \"44d4ce55-e715-43aa-bc81-b8ae78482fb3\" (UID: \"44d4ce55-e715-43aa-bc81-b8ae78482fb3\") " Feb 28 05:26:04 crc kubenswrapper[5014]: I0228 05:26:04.743535 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44d4ce55-e715-43aa-bc81-b8ae78482fb3-kube-api-access-ktgjt" (OuterVolumeSpecName: "kube-api-access-ktgjt") pod "44d4ce55-e715-43aa-bc81-b8ae78482fb3" (UID: "44d4ce55-e715-43aa-bc81-b8ae78482fb3"). InnerVolumeSpecName "kube-api-access-ktgjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:26:04 crc kubenswrapper[5014]: I0228 05:26:04.838408 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktgjt\" (UniqueName: \"kubernetes.io/projected/44d4ce55-e715-43aa-bc81-b8ae78482fb3-kube-api-access-ktgjt\") on node \"crc\" DevicePath \"\"" Feb 28 05:26:05 crc kubenswrapper[5014]: I0228 05:26:05.188406 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537606-9stjj" event={"ID":"44d4ce55-e715-43aa-bc81-b8ae78482fb3","Type":"ContainerDied","Data":"06646295107b1aaa6414f646c3c09a703465fb3b7d947fda350a725e617259ff"} Feb 28 05:26:05 crc kubenswrapper[5014]: I0228 05:26:05.188742 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06646295107b1aaa6414f646c3c09a703465fb3b7d947fda350a725e617259ff" Feb 28 05:26:05 crc kubenswrapper[5014]: I0228 05:26:05.188560 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537606-9stjj" Feb 28 05:26:05 crc kubenswrapper[5014]: I0228 05:26:05.642667 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537600-w7ql9"] Feb 28 05:26:05 crc kubenswrapper[5014]: I0228 05:26:05.650150 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537600-w7ql9"] Feb 28 05:26:06 crc kubenswrapper[5014]: I0228 05:26:06.172329 5014 scope.go:117] "RemoveContainer" containerID="192dfe00bc68c47111b7f3bc09d6d50a5ae0c8daea27c02e7d24d0f8e808dd55" Feb 28 05:26:06 crc kubenswrapper[5014]: E0228 05:26:06.172594 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:26:06 crc kubenswrapper[5014]: I0228 05:26:06.184513 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d030132-c1a7-4e6a-97d5-c73a8505d92e" path="/var/lib/kubelet/pods/4d030132-c1a7-4e6a-97d5-c73a8505d92e/volumes" Feb 28 05:26:19 crc kubenswrapper[5014]: I0228 05:26:19.172156 5014 scope.go:117] "RemoveContainer" containerID="192dfe00bc68c47111b7f3bc09d6d50a5ae0c8daea27c02e7d24d0f8e808dd55" Feb 28 05:26:19 crc kubenswrapper[5014]: E0228 05:26:19.172917 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" 
podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:26:30 crc kubenswrapper[5014]: I0228 05:26:30.173133 5014 scope.go:117] "RemoveContainer" containerID="192dfe00bc68c47111b7f3bc09d6d50a5ae0c8daea27c02e7d24d0f8e808dd55" Feb 28 05:26:30 crc kubenswrapper[5014]: E0228 05:26:30.173793 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:26:42 crc kubenswrapper[5014]: I0228 05:26:42.192508 5014 scope.go:117] "RemoveContainer" containerID="192dfe00bc68c47111b7f3bc09d6d50a5ae0c8daea27c02e7d24d0f8e808dd55" Feb 28 05:26:42 crc kubenswrapper[5014]: E0228 05:26:42.193298 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:26:56 crc kubenswrapper[5014]: I0228 05:26:56.173048 5014 scope.go:117] "RemoveContainer" containerID="192dfe00bc68c47111b7f3bc09d6d50a5ae0c8daea27c02e7d24d0f8e808dd55" Feb 28 05:26:56 crc kubenswrapper[5014]: E0228 05:26:56.173924 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:26:58 crc kubenswrapper[5014]: I0228 05:26:58.618395 5014 scope.go:117] "RemoveContainer" containerID="d6cbc90e4677bef25de47024655fb2510a741251ea2bce7309d212e532c9ff9e" Feb 28 05:27:02 crc kubenswrapper[5014]: I0228 05:27:02.082348 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qlh52"] Feb 28 05:27:02 crc kubenswrapper[5014]: E0228 05:27:02.083079 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44d4ce55-e715-43aa-bc81-b8ae78482fb3" containerName="oc" Feb 28 05:27:02 crc kubenswrapper[5014]: I0228 05:27:02.083094 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="44d4ce55-e715-43aa-bc81-b8ae78482fb3" containerName="oc" Feb 28 05:27:02 crc kubenswrapper[5014]: I0228 05:27:02.083273 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="44d4ce55-e715-43aa-bc81-b8ae78482fb3" containerName="oc" Feb 28 05:27:02 crc kubenswrapper[5014]: I0228 05:27:02.086414 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qlh52" Feb 28 05:27:02 crc kubenswrapper[5014]: I0228 05:27:02.092817 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qlh52"] Feb 28 05:27:02 crc kubenswrapper[5014]: I0228 05:27:02.184853 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e05f68aa-af01-41ba-8a22-d00ebc65ff38-utilities\") pod \"redhat-operators-qlh52\" (UID: \"e05f68aa-af01-41ba-8a22-d00ebc65ff38\") " pod="openshift-marketplace/redhat-operators-qlh52" Feb 28 05:27:02 crc kubenswrapper[5014]: I0228 05:27:02.184921 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5m4g\" (UniqueName: \"kubernetes.io/projected/e05f68aa-af01-41ba-8a22-d00ebc65ff38-kube-api-access-t5m4g\") pod \"redhat-operators-qlh52\" (UID: \"e05f68aa-af01-41ba-8a22-d00ebc65ff38\") " pod="openshift-marketplace/redhat-operators-qlh52" Feb 28 05:27:02 crc kubenswrapper[5014]: I0228 05:27:02.184986 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e05f68aa-af01-41ba-8a22-d00ebc65ff38-catalog-content\") pod \"redhat-operators-qlh52\" (UID: \"e05f68aa-af01-41ba-8a22-d00ebc65ff38\") " pod="openshift-marketplace/redhat-operators-qlh52" Feb 28 05:27:02 crc kubenswrapper[5014]: I0228 05:27:02.286387 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e05f68aa-af01-41ba-8a22-d00ebc65ff38-utilities\") pod \"redhat-operators-qlh52\" (UID: \"e05f68aa-af01-41ba-8a22-d00ebc65ff38\") " pod="openshift-marketplace/redhat-operators-qlh52" Feb 28 05:27:02 crc kubenswrapper[5014]: I0228 05:27:02.286481 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-t5m4g\" (UniqueName: \"kubernetes.io/projected/e05f68aa-af01-41ba-8a22-d00ebc65ff38-kube-api-access-t5m4g\") pod \"redhat-operators-qlh52\" (UID: \"e05f68aa-af01-41ba-8a22-d00ebc65ff38\") " pod="openshift-marketplace/redhat-operators-qlh52" Feb 28 05:27:02 crc kubenswrapper[5014]: I0228 05:27:02.286585 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e05f68aa-af01-41ba-8a22-d00ebc65ff38-catalog-content\") pod \"redhat-operators-qlh52\" (UID: \"e05f68aa-af01-41ba-8a22-d00ebc65ff38\") " pod="openshift-marketplace/redhat-operators-qlh52" Feb 28 05:27:02 crc kubenswrapper[5014]: I0228 05:27:02.286996 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e05f68aa-af01-41ba-8a22-d00ebc65ff38-utilities\") pod \"redhat-operators-qlh52\" (UID: \"e05f68aa-af01-41ba-8a22-d00ebc65ff38\") " pod="openshift-marketplace/redhat-operators-qlh52" Feb 28 05:27:02 crc kubenswrapper[5014]: I0228 05:27:02.287111 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e05f68aa-af01-41ba-8a22-d00ebc65ff38-catalog-content\") pod \"redhat-operators-qlh52\" (UID: \"e05f68aa-af01-41ba-8a22-d00ebc65ff38\") " pod="openshift-marketplace/redhat-operators-qlh52" Feb 28 05:27:02 crc kubenswrapper[5014]: I0228 05:27:02.311064 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5m4g\" (UniqueName: \"kubernetes.io/projected/e05f68aa-af01-41ba-8a22-d00ebc65ff38-kube-api-access-t5m4g\") pod \"redhat-operators-qlh52\" (UID: \"e05f68aa-af01-41ba-8a22-d00ebc65ff38\") " pod="openshift-marketplace/redhat-operators-qlh52" Feb 28 05:27:02 crc kubenswrapper[5014]: I0228 05:27:02.412746 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qlh52" Feb 28 05:27:02 crc kubenswrapper[5014]: I0228 05:27:02.967283 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qlh52"] Feb 28 05:27:03 crc kubenswrapper[5014]: I0228 05:27:03.761651 5014 generic.go:334] "Generic (PLEG): container finished" podID="e05f68aa-af01-41ba-8a22-d00ebc65ff38" containerID="dd5837511a6d7e2354f98d4c41d9e5e855ee5f317414e6d07d52a806fe4d7727" exitCode=0 Feb 28 05:27:03 crc kubenswrapper[5014]: I0228 05:27:03.761735 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qlh52" event={"ID":"e05f68aa-af01-41ba-8a22-d00ebc65ff38","Type":"ContainerDied","Data":"dd5837511a6d7e2354f98d4c41d9e5e855ee5f317414e6d07d52a806fe4d7727"} Feb 28 05:27:03 crc kubenswrapper[5014]: I0228 05:27:03.762136 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qlh52" event={"ID":"e05f68aa-af01-41ba-8a22-d00ebc65ff38","Type":"ContainerStarted","Data":"49fb3ad6911e9f9aafb6a03b75b20a4565abc9f82aabefa86d5e40ab2be628e7"} Feb 28 05:27:07 crc kubenswrapper[5014]: I0228 05:27:07.834826 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qlh52" event={"ID":"e05f68aa-af01-41ba-8a22-d00ebc65ff38","Type":"ContainerStarted","Data":"aa36e9bb1259598d46493ffa501808fd8444c03612e362f3d3e10cc156c3120f"} Feb 28 05:27:09 crc kubenswrapper[5014]: I0228 05:27:09.855089 5014 generic.go:334] "Generic (PLEG): container finished" podID="e05f68aa-af01-41ba-8a22-d00ebc65ff38" containerID="aa36e9bb1259598d46493ffa501808fd8444c03612e362f3d3e10cc156c3120f" exitCode=0 Feb 28 05:27:09 crc kubenswrapper[5014]: I0228 05:27:09.855175 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qlh52" 
event={"ID":"e05f68aa-af01-41ba-8a22-d00ebc65ff38","Type":"ContainerDied","Data":"aa36e9bb1259598d46493ffa501808fd8444c03612e362f3d3e10cc156c3120f"} Feb 28 05:27:10 crc kubenswrapper[5014]: I0228 05:27:10.172569 5014 scope.go:117] "RemoveContainer" containerID="192dfe00bc68c47111b7f3bc09d6d50a5ae0c8daea27c02e7d24d0f8e808dd55" Feb 28 05:27:10 crc kubenswrapper[5014]: E0228 05:27:10.173159 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:27:11 crc kubenswrapper[5014]: I0228 05:27:11.873335 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qlh52" event={"ID":"e05f68aa-af01-41ba-8a22-d00ebc65ff38","Type":"ContainerStarted","Data":"e389e8ba56bc140affb8528ddad0e70c5f2dbbcd503584d7a7392c65c59e664d"} Feb 28 05:27:11 crc kubenswrapper[5014]: I0228 05:27:11.899545 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qlh52" podStartSLOduration=2.279931989 podStartE2EDuration="9.899521215s" podCreationTimestamp="2026-02-28 05:27:02 +0000 UTC" firstStartedPulling="2026-02-28 05:27:03.764753905 +0000 UTC m=+3212.434879815" lastFinishedPulling="2026-02-28 05:27:11.384343131 +0000 UTC m=+3220.054469041" observedRunningTime="2026-02-28 05:27:11.890047948 +0000 UTC m=+3220.560173848" watchObservedRunningTime="2026-02-28 05:27:11.899521215 +0000 UTC m=+3220.569647125" Feb 28 05:27:12 crc kubenswrapper[5014]: I0228 05:27:12.413174 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qlh52" Feb 28 05:27:12 crc 
kubenswrapper[5014]: I0228 05:27:12.413470 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qlh52" Feb 28 05:27:13 crc kubenswrapper[5014]: I0228 05:27:13.464165 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qlh52" podUID="e05f68aa-af01-41ba-8a22-d00ebc65ff38" containerName="registry-server" probeResult="failure" output=< Feb 28 05:27:13 crc kubenswrapper[5014]: timeout: failed to connect service ":50051" within 1s Feb 28 05:27:13 crc kubenswrapper[5014]: > Feb 28 05:27:22 crc kubenswrapper[5014]: I0228 05:27:22.474547 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qlh52" Feb 28 05:27:22 crc kubenswrapper[5014]: I0228 05:27:22.535677 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qlh52" Feb 28 05:27:22 crc kubenswrapper[5014]: I0228 05:27:22.718665 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qlh52"] Feb 28 05:27:23 crc kubenswrapper[5014]: I0228 05:27:23.972822 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qlh52" podUID="e05f68aa-af01-41ba-8a22-d00ebc65ff38" containerName="registry-server" containerID="cri-o://e389e8ba56bc140affb8528ddad0e70c5f2dbbcd503584d7a7392c65c59e664d" gracePeriod=2 Feb 28 05:27:24 crc kubenswrapper[5014]: I0228 05:27:24.612643 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qlh52" Feb 28 05:27:24 crc kubenswrapper[5014]: I0228 05:27:24.778972 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5m4g\" (UniqueName: \"kubernetes.io/projected/e05f68aa-af01-41ba-8a22-d00ebc65ff38-kube-api-access-t5m4g\") pod \"e05f68aa-af01-41ba-8a22-d00ebc65ff38\" (UID: \"e05f68aa-af01-41ba-8a22-d00ebc65ff38\") " Feb 28 05:27:24 crc kubenswrapper[5014]: I0228 05:27:24.779110 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e05f68aa-af01-41ba-8a22-d00ebc65ff38-catalog-content\") pod \"e05f68aa-af01-41ba-8a22-d00ebc65ff38\" (UID: \"e05f68aa-af01-41ba-8a22-d00ebc65ff38\") " Feb 28 05:27:24 crc kubenswrapper[5014]: I0228 05:27:24.779174 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e05f68aa-af01-41ba-8a22-d00ebc65ff38-utilities\") pod \"e05f68aa-af01-41ba-8a22-d00ebc65ff38\" (UID: \"e05f68aa-af01-41ba-8a22-d00ebc65ff38\") " Feb 28 05:27:24 crc kubenswrapper[5014]: I0228 05:27:24.780077 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e05f68aa-af01-41ba-8a22-d00ebc65ff38-utilities" (OuterVolumeSpecName: "utilities") pod "e05f68aa-af01-41ba-8a22-d00ebc65ff38" (UID: "e05f68aa-af01-41ba-8a22-d00ebc65ff38"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:27:24 crc kubenswrapper[5014]: I0228 05:27:24.787099 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e05f68aa-af01-41ba-8a22-d00ebc65ff38-kube-api-access-t5m4g" (OuterVolumeSpecName: "kube-api-access-t5m4g") pod "e05f68aa-af01-41ba-8a22-d00ebc65ff38" (UID: "e05f68aa-af01-41ba-8a22-d00ebc65ff38"). InnerVolumeSpecName "kube-api-access-t5m4g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:27:24 crc kubenswrapper[5014]: I0228 05:27:24.881237 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5m4g\" (UniqueName: \"kubernetes.io/projected/e05f68aa-af01-41ba-8a22-d00ebc65ff38-kube-api-access-t5m4g\") on node \"crc\" DevicePath \"\"" Feb 28 05:27:24 crc kubenswrapper[5014]: I0228 05:27:24.881786 5014 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e05f68aa-af01-41ba-8a22-d00ebc65ff38-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 05:27:24 crc kubenswrapper[5014]: I0228 05:27:24.913861 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e05f68aa-af01-41ba-8a22-d00ebc65ff38-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e05f68aa-af01-41ba-8a22-d00ebc65ff38" (UID: "e05f68aa-af01-41ba-8a22-d00ebc65ff38"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:27:24 crc kubenswrapper[5014]: I0228 05:27:24.983109 5014 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e05f68aa-af01-41ba-8a22-d00ebc65ff38-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 05:27:24 crc kubenswrapper[5014]: I0228 05:27:24.984293 5014 generic.go:334] "Generic (PLEG): container finished" podID="e05f68aa-af01-41ba-8a22-d00ebc65ff38" containerID="e389e8ba56bc140affb8528ddad0e70c5f2dbbcd503584d7a7392c65c59e664d" exitCode=0 Feb 28 05:27:24 crc kubenswrapper[5014]: I0228 05:27:24.984344 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qlh52" event={"ID":"e05f68aa-af01-41ba-8a22-d00ebc65ff38","Type":"ContainerDied","Data":"e389e8ba56bc140affb8528ddad0e70c5f2dbbcd503584d7a7392c65c59e664d"} Feb 28 05:27:24 crc kubenswrapper[5014]: I0228 05:27:24.984374 5014 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qlh52" Feb 28 05:27:24 crc kubenswrapper[5014]: I0228 05:27:24.984402 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qlh52" event={"ID":"e05f68aa-af01-41ba-8a22-d00ebc65ff38","Type":"ContainerDied","Data":"49fb3ad6911e9f9aafb6a03b75b20a4565abc9f82aabefa86d5e40ab2be628e7"} Feb 28 05:27:24 crc kubenswrapper[5014]: I0228 05:27:24.984438 5014 scope.go:117] "RemoveContainer" containerID="e389e8ba56bc140affb8528ddad0e70c5f2dbbcd503584d7a7392c65c59e664d" Feb 28 05:27:25 crc kubenswrapper[5014]: I0228 05:27:25.020203 5014 scope.go:117] "RemoveContainer" containerID="aa36e9bb1259598d46493ffa501808fd8444c03612e362f3d3e10cc156c3120f" Feb 28 05:27:25 crc kubenswrapper[5014]: I0228 05:27:25.041483 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qlh52"] Feb 28 05:27:25 crc kubenswrapper[5014]: I0228 05:27:25.048611 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qlh52"] Feb 28 05:27:25 crc kubenswrapper[5014]: I0228 05:27:25.062400 5014 scope.go:117] "RemoveContainer" containerID="dd5837511a6d7e2354f98d4c41d9e5e855ee5f317414e6d07d52a806fe4d7727" Feb 28 05:27:25 crc kubenswrapper[5014]: I0228 05:27:25.098277 5014 scope.go:117] "RemoveContainer" containerID="e389e8ba56bc140affb8528ddad0e70c5f2dbbcd503584d7a7392c65c59e664d" Feb 28 05:27:25 crc kubenswrapper[5014]: E0228 05:27:25.100092 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e389e8ba56bc140affb8528ddad0e70c5f2dbbcd503584d7a7392c65c59e664d\": container with ID starting with e389e8ba56bc140affb8528ddad0e70c5f2dbbcd503584d7a7392c65c59e664d not found: ID does not exist" containerID="e389e8ba56bc140affb8528ddad0e70c5f2dbbcd503584d7a7392c65c59e664d" Feb 28 05:27:25 crc kubenswrapper[5014]: I0228 05:27:25.100123 5014 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e389e8ba56bc140affb8528ddad0e70c5f2dbbcd503584d7a7392c65c59e664d"} err="failed to get container status \"e389e8ba56bc140affb8528ddad0e70c5f2dbbcd503584d7a7392c65c59e664d\": rpc error: code = NotFound desc = could not find container \"e389e8ba56bc140affb8528ddad0e70c5f2dbbcd503584d7a7392c65c59e664d\": container with ID starting with e389e8ba56bc140affb8528ddad0e70c5f2dbbcd503584d7a7392c65c59e664d not found: ID does not exist" Feb 28 05:27:25 crc kubenswrapper[5014]: I0228 05:27:25.100143 5014 scope.go:117] "RemoveContainer" containerID="aa36e9bb1259598d46493ffa501808fd8444c03612e362f3d3e10cc156c3120f" Feb 28 05:27:25 crc kubenswrapper[5014]: E0228 05:27:25.100439 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa36e9bb1259598d46493ffa501808fd8444c03612e362f3d3e10cc156c3120f\": container with ID starting with aa36e9bb1259598d46493ffa501808fd8444c03612e362f3d3e10cc156c3120f not found: ID does not exist" containerID="aa36e9bb1259598d46493ffa501808fd8444c03612e362f3d3e10cc156c3120f" Feb 28 05:27:25 crc kubenswrapper[5014]: I0228 05:27:25.100459 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa36e9bb1259598d46493ffa501808fd8444c03612e362f3d3e10cc156c3120f"} err="failed to get container status \"aa36e9bb1259598d46493ffa501808fd8444c03612e362f3d3e10cc156c3120f\": rpc error: code = NotFound desc = could not find container \"aa36e9bb1259598d46493ffa501808fd8444c03612e362f3d3e10cc156c3120f\": container with ID starting with aa36e9bb1259598d46493ffa501808fd8444c03612e362f3d3e10cc156c3120f not found: ID does not exist" Feb 28 05:27:25 crc kubenswrapper[5014]: I0228 05:27:25.100489 5014 scope.go:117] "RemoveContainer" containerID="dd5837511a6d7e2354f98d4c41d9e5e855ee5f317414e6d07d52a806fe4d7727" Feb 28 05:27:25 crc kubenswrapper[5014]: E0228 
05:27:25.100941 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd5837511a6d7e2354f98d4c41d9e5e855ee5f317414e6d07d52a806fe4d7727\": container with ID starting with dd5837511a6d7e2354f98d4c41d9e5e855ee5f317414e6d07d52a806fe4d7727 not found: ID does not exist" containerID="dd5837511a6d7e2354f98d4c41d9e5e855ee5f317414e6d07d52a806fe4d7727" Feb 28 05:27:25 crc kubenswrapper[5014]: I0228 05:27:25.100977 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd5837511a6d7e2354f98d4c41d9e5e855ee5f317414e6d07d52a806fe4d7727"} err="failed to get container status \"dd5837511a6d7e2354f98d4c41d9e5e855ee5f317414e6d07d52a806fe4d7727\": rpc error: code = NotFound desc = could not find container \"dd5837511a6d7e2354f98d4c41d9e5e855ee5f317414e6d07d52a806fe4d7727\": container with ID starting with dd5837511a6d7e2354f98d4c41d9e5e855ee5f317414e6d07d52a806fe4d7727 not found: ID does not exist" Feb 28 05:27:25 crc kubenswrapper[5014]: I0228 05:27:25.173180 5014 scope.go:117] "RemoveContainer" containerID="192dfe00bc68c47111b7f3bc09d6d50a5ae0c8daea27c02e7d24d0f8e808dd55" Feb 28 05:27:25 crc kubenswrapper[5014]: E0228 05:27:25.173385 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:27:26 crc kubenswrapper[5014]: I0228 05:27:26.181324 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e05f68aa-af01-41ba-8a22-d00ebc65ff38" path="/var/lib/kubelet/pods/e05f68aa-af01-41ba-8a22-d00ebc65ff38/volumes" Feb 28 05:27:37 crc kubenswrapper[5014]: I0228 05:27:37.171781 
5014 scope.go:117] "RemoveContainer" containerID="192dfe00bc68c47111b7f3bc09d6d50a5ae0c8daea27c02e7d24d0f8e808dd55" Feb 28 05:27:37 crc kubenswrapper[5014]: E0228 05:27:37.172838 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:27:51 crc kubenswrapper[5014]: I0228 05:27:51.172285 5014 scope.go:117] "RemoveContainer" containerID="192dfe00bc68c47111b7f3bc09d6d50a5ae0c8daea27c02e7d24d0f8e808dd55" Feb 28 05:27:51 crc kubenswrapper[5014]: E0228 05:27:51.174229 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:28:00 crc kubenswrapper[5014]: I0228 05:28:00.181967 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537608-j482h"] Feb 28 05:28:00 crc kubenswrapper[5014]: E0228 05:28:00.182771 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e05f68aa-af01-41ba-8a22-d00ebc65ff38" containerName="extract-utilities" Feb 28 05:28:00 crc kubenswrapper[5014]: I0228 05:28:00.182783 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="e05f68aa-af01-41ba-8a22-d00ebc65ff38" containerName="extract-utilities" Feb 28 05:28:00 crc kubenswrapper[5014]: E0228 05:28:00.182876 5014 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e05f68aa-af01-41ba-8a22-d00ebc65ff38" containerName="registry-server" Feb 28 05:28:00 crc kubenswrapper[5014]: I0228 05:28:00.182883 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="e05f68aa-af01-41ba-8a22-d00ebc65ff38" containerName="registry-server" Feb 28 05:28:00 crc kubenswrapper[5014]: E0228 05:28:00.182899 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e05f68aa-af01-41ba-8a22-d00ebc65ff38" containerName="extract-content" Feb 28 05:28:00 crc kubenswrapper[5014]: I0228 05:28:00.182905 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="e05f68aa-af01-41ba-8a22-d00ebc65ff38" containerName="extract-content" Feb 28 05:28:00 crc kubenswrapper[5014]: I0228 05:28:00.183074 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="e05f68aa-af01-41ba-8a22-d00ebc65ff38" containerName="registry-server" Feb 28 05:28:00 crc kubenswrapper[5014]: I0228 05:28:00.183619 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537608-j482h" Feb 28 05:28:00 crc kubenswrapper[5014]: I0228 05:28:00.185861 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 05:28:00 crc kubenswrapper[5014]: I0228 05:28:00.185986 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-r6pdk" Feb 28 05:28:00 crc kubenswrapper[5014]: I0228 05:28:00.186186 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 05:28:00 crc kubenswrapper[5014]: I0228 05:28:00.207326 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537608-j482h"] Feb 28 05:28:00 crc kubenswrapper[5014]: I0228 05:28:00.313649 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x84p\" (UniqueName: 
\"kubernetes.io/projected/13789da7-2c4d-4304-97da-be6aae8dadaa-kube-api-access-6x84p\") pod \"auto-csr-approver-29537608-j482h\" (UID: \"13789da7-2c4d-4304-97da-be6aae8dadaa\") " pod="openshift-infra/auto-csr-approver-29537608-j482h" Feb 28 05:28:00 crc kubenswrapper[5014]: I0228 05:28:00.416302 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x84p\" (UniqueName: \"kubernetes.io/projected/13789da7-2c4d-4304-97da-be6aae8dadaa-kube-api-access-6x84p\") pod \"auto-csr-approver-29537608-j482h\" (UID: \"13789da7-2c4d-4304-97da-be6aae8dadaa\") " pod="openshift-infra/auto-csr-approver-29537608-j482h" Feb 28 05:28:00 crc kubenswrapper[5014]: I0228 05:28:00.435994 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x84p\" (UniqueName: \"kubernetes.io/projected/13789da7-2c4d-4304-97da-be6aae8dadaa-kube-api-access-6x84p\") pod \"auto-csr-approver-29537608-j482h\" (UID: \"13789da7-2c4d-4304-97da-be6aae8dadaa\") " pod="openshift-infra/auto-csr-approver-29537608-j482h" Feb 28 05:28:00 crc kubenswrapper[5014]: I0228 05:28:00.509530 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537608-j482h" Feb 28 05:28:00 crc kubenswrapper[5014]: I0228 05:28:00.916165 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537608-j482h"] Feb 28 05:28:01 crc kubenswrapper[5014]: I0228 05:28:01.008430 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537608-j482h" event={"ID":"13789da7-2c4d-4304-97da-be6aae8dadaa","Type":"ContainerStarted","Data":"80dc3bad61ea3aa88763aefcf9abc7d7b7000c5319e5b699f5e86368564d72c9"} Feb 28 05:28:05 crc kubenswrapper[5014]: I0228 05:28:05.057496 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537608-j482h" event={"ID":"13789da7-2c4d-4304-97da-be6aae8dadaa","Type":"ContainerStarted","Data":"bc388b30d32b496dabb8b065563b1d7641bf18b5022e7568453e4f3110ed1979"} Feb 28 05:28:05 crc kubenswrapper[5014]: I0228 05:28:05.171679 5014 scope.go:117] "RemoveContainer" containerID="192dfe00bc68c47111b7f3bc09d6d50a5ae0c8daea27c02e7d24d0f8e808dd55" Feb 28 05:28:05 crc kubenswrapper[5014]: E0228 05:28:05.172322 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:28:06 crc kubenswrapper[5014]: I0228 05:28:06.080041 5014 generic.go:334] "Generic (PLEG): container finished" podID="13789da7-2c4d-4304-97da-be6aae8dadaa" containerID="bc388b30d32b496dabb8b065563b1d7641bf18b5022e7568453e4f3110ed1979" exitCode=0 Feb 28 05:28:06 crc kubenswrapper[5014]: I0228 05:28:06.080087 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29537608-j482h" event={"ID":"13789da7-2c4d-4304-97da-be6aae8dadaa","Type":"ContainerDied","Data":"bc388b30d32b496dabb8b065563b1d7641bf18b5022e7568453e4f3110ed1979"} Feb 28 05:28:07 crc kubenswrapper[5014]: I0228 05:28:07.585171 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537608-j482h" Feb 28 05:28:07 crc kubenswrapper[5014]: I0228 05:28:07.668908 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6x84p\" (UniqueName: \"kubernetes.io/projected/13789da7-2c4d-4304-97da-be6aae8dadaa-kube-api-access-6x84p\") pod \"13789da7-2c4d-4304-97da-be6aae8dadaa\" (UID: \"13789da7-2c4d-4304-97da-be6aae8dadaa\") " Feb 28 05:28:07 crc kubenswrapper[5014]: I0228 05:28:07.675030 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13789da7-2c4d-4304-97da-be6aae8dadaa-kube-api-access-6x84p" (OuterVolumeSpecName: "kube-api-access-6x84p") pod "13789da7-2c4d-4304-97da-be6aae8dadaa" (UID: "13789da7-2c4d-4304-97da-be6aae8dadaa"). InnerVolumeSpecName "kube-api-access-6x84p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:28:07 crc kubenswrapper[5014]: I0228 05:28:07.771283 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6x84p\" (UniqueName: \"kubernetes.io/projected/13789da7-2c4d-4304-97da-be6aae8dadaa-kube-api-access-6x84p\") on node \"crc\" DevicePath \"\"" Feb 28 05:28:08 crc kubenswrapper[5014]: I0228 05:28:08.098630 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537608-j482h" event={"ID":"13789da7-2c4d-4304-97da-be6aae8dadaa","Type":"ContainerDied","Data":"80dc3bad61ea3aa88763aefcf9abc7d7b7000c5319e5b699f5e86368564d72c9"} Feb 28 05:28:08 crc kubenswrapper[5014]: I0228 05:28:08.098674 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80dc3bad61ea3aa88763aefcf9abc7d7b7000c5319e5b699f5e86368564d72c9" Feb 28 05:28:08 crc kubenswrapper[5014]: I0228 05:28:08.098684 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537608-j482h" Feb 28 05:28:08 crc kubenswrapper[5014]: I0228 05:28:08.660848 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537602-wgkjt"] Feb 28 05:28:08 crc kubenswrapper[5014]: I0228 05:28:08.671396 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537602-wgkjt"] Feb 28 05:28:10 crc kubenswrapper[5014]: I0228 05:28:10.182138 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33996db3-13d8-4fc6-a95e-de8b2582ddf7" path="/var/lib/kubelet/pods/33996db3-13d8-4fc6-a95e-de8b2582ddf7/volumes" Feb 28 05:28:17 crc kubenswrapper[5014]: I0228 05:28:17.172249 5014 scope.go:117] "RemoveContainer" containerID="192dfe00bc68c47111b7f3bc09d6d50a5ae0c8daea27c02e7d24d0f8e808dd55" Feb 28 05:28:17 crc kubenswrapper[5014]: E0228 05:28:17.173006 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:28:28 crc kubenswrapper[5014]: I0228 05:28:28.172423 5014 scope.go:117] "RemoveContainer" containerID="192dfe00bc68c47111b7f3bc09d6d50a5ae0c8daea27c02e7d24d0f8e808dd55" Feb 28 05:28:28 crc kubenswrapper[5014]: E0228 05:28:28.173611 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:28:43 crc kubenswrapper[5014]: I0228 05:28:43.171589 5014 scope.go:117] "RemoveContainer" containerID="192dfe00bc68c47111b7f3bc09d6d50a5ae0c8daea27c02e7d24d0f8e808dd55" Feb 28 05:28:43 crc kubenswrapper[5014]: E0228 05:28:43.172496 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:28:58 crc kubenswrapper[5014]: I0228 05:28:58.172333 5014 scope.go:117] "RemoveContainer" containerID="192dfe00bc68c47111b7f3bc09d6d50a5ae0c8daea27c02e7d24d0f8e808dd55" Feb 28 05:28:58 crc kubenswrapper[5014]: E0228 05:28:58.173067 5014 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:28:59 crc kubenswrapper[5014]: I0228 05:28:59.028980 5014 scope.go:117] "RemoveContainer" containerID="a4cb4a8486521786d2b9f49032687480ceda944606063326dbeb1ab6411b7726" Feb 28 05:29:13 crc kubenswrapper[5014]: I0228 05:29:13.172378 5014 scope.go:117] "RemoveContainer" containerID="192dfe00bc68c47111b7f3bc09d6d50a5ae0c8daea27c02e7d24d0f8e808dd55" Feb 28 05:29:13 crc kubenswrapper[5014]: E0228 05:29:13.173357 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:29:26 crc kubenswrapper[5014]: I0228 05:29:26.171651 5014 scope.go:117] "RemoveContainer" containerID="192dfe00bc68c47111b7f3bc09d6d50a5ae0c8daea27c02e7d24d0f8e808dd55" Feb 28 05:29:26 crc kubenswrapper[5014]: E0228 05:29:26.172450 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:29:28 crc kubenswrapper[5014]: I0228 05:29:28.848651 5014 generic.go:334] "Generic 
(PLEG): container finished" podID="2db9b9b7-c55d-4b8b-b51b-cd081afed742" containerID="0784ead89c86d88464af06b257b8c6833b140cddf6dedbfa095feebd3952a93e" exitCode=0 Feb 28 05:29:28 crc kubenswrapper[5014]: I0228 05:29:28.848851 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2db9b9b7-c55d-4b8b-b51b-cd081afed742","Type":"ContainerDied","Data":"0784ead89c86d88464af06b257b8c6833b140cddf6dedbfa095feebd3952a93e"} Feb 28 05:29:30 crc kubenswrapper[5014]: I0228 05:29:30.319250 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 28 05:29:30 crc kubenswrapper[5014]: I0228 05:29:30.358343 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2db9b9b7-c55d-4b8b-b51b-cd081afed742-test-operator-ephemeral-temporary\") pod \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\" (UID: \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\") " Feb 28 05:29:30 crc kubenswrapper[5014]: I0228 05:29:30.358432 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2db9b9b7-c55d-4b8b-b51b-cd081afed742-ssh-key\") pod \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\" (UID: \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\") " Feb 28 05:29:30 crc kubenswrapper[5014]: I0228 05:29:30.358489 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2db9b9b7-c55d-4b8b-b51b-cd081afed742-openstack-config\") pod \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\" (UID: \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\") " Feb 28 05:29:30 crc kubenswrapper[5014]: I0228 05:29:30.358516 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwr7s\" (UniqueName: 
\"kubernetes.io/projected/2db9b9b7-c55d-4b8b-b51b-cd081afed742-kube-api-access-kwr7s\") pod \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\" (UID: \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\") " Feb 28 05:29:30 crc kubenswrapper[5014]: I0228 05:29:30.358551 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2db9b9b7-c55d-4b8b-b51b-cd081afed742-config-data\") pod \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\" (UID: \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\") " Feb 28 05:29:30 crc kubenswrapper[5014]: I0228 05:29:30.358599 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\" (UID: \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\") " Feb 28 05:29:30 crc kubenswrapper[5014]: I0228 05:29:30.358630 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2db9b9b7-c55d-4b8b-b51b-cd081afed742-openstack-config-secret\") pod \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\" (UID: \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\") " Feb 28 05:29:30 crc kubenswrapper[5014]: I0228 05:29:30.358655 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2db9b9b7-c55d-4b8b-b51b-cd081afed742-ca-certs\") pod \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\" (UID: \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\") " Feb 28 05:29:30 crc kubenswrapper[5014]: I0228 05:29:30.358716 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2db9b9b7-c55d-4b8b-b51b-cd081afed742-test-operator-ephemeral-workdir\") pod \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\" (UID: \"2db9b9b7-c55d-4b8b-b51b-cd081afed742\") " Feb 28 05:29:30 crc kubenswrapper[5014]: 
I0228 05:29:30.359845 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2db9b9b7-c55d-4b8b-b51b-cd081afed742-config-data" (OuterVolumeSpecName: "config-data") pod "2db9b9b7-c55d-4b8b-b51b-cd081afed742" (UID: "2db9b9b7-c55d-4b8b-b51b-cd081afed742"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 05:29:30 crc kubenswrapper[5014]: I0228 05:29:30.360221 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2db9b9b7-c55d-4b8b-b51b-cd081afed742-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "2db9b9b7-c55d-4b8b-b51b-cd081afed742" (UID: "2db9b9b7-c55d-4b8b-b51b-cd081afed742"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:29:30 crc kubenswrapper[5014]: I0228 05:29:30.363237 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2db9b9b7-c55d-4b8b-b51b-cd081afed742-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "2db9b9b7-c55d-4b8b-b51b-cd081afed742" (UID: "2db9b9b7-c55d-4b8b-b51b-cd081afed742"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:29:30 crc kubenswrapper[5014]: I0228 05:29:30.365872 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "test-operator-logs") pod "2db9b9b7-c55d-4b8b-b51b-cd081afed742" (UID: "2db9b9b7-c55d-4b8b-b51b-cd081afed742"). InnerVolumeSpecName "local-storage07-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 28 05:29:30 crc kubenswrapper[5014]: I0228 05:29:30.366847 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2db9b9b7-c55d-4b8b-b51b-cd081afed742-kube-api-access-kwr7s" (OuterVolumeSpecName: "kube-api-access-kwr7s") pod "2db9b9b7-c55d-4b8b-b51b-cd081afed742" (UID: "2db9b9b7-c55d-4b8b-b51b-cd081afed742"). InnerVolumeSpecName "kube-api-access-kwr7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:29:30 crc kubenswrapper[5014]: I0228 05:29:30.388826 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2db9b9b7-c55d-4b8b-b51b-cd081afed742-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "2db9b9b7-c55d-4b8b-b51b-cd081afed742" (UID: "2db9b9b7-c55d-4b8b-b51b-cd081afed742"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:29:30 crc kubenswrapper[5014]: I0228 05:29:30.397847 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2db9b9b7-c55d-4b8b-b51b-cd081afed742-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "2db9b9b7-c55d-4b8b-b51b-cd081afed742" (UID: "2db9b9b7-c55d-4b8b-b51b-cd081afed742"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:29:30 crc kubenswrapper[5014]: I0228 05:29:30.400459 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2db9b9b7-c55d-4b8b-b51b-cd081afed742-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2db9b9b7-c55d-4b8b-b51b-cd081afed742" (UID: "2db9b9b7-c55d-4b8b-b51b-cd081afed742"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:29:30 crc kubenswrapper[5014]: I0228 05:29:30.415093 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2db9b9b7-c55d-4b8b-b51b-cd081afed742-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "2db9b9b7-c55d-4b8b-b51b-cd081afed742" (UID: "2db9b9b7-c55d-4b8b-b51b-cd081afed742"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 05:29:30 crc kubenswrapper[5014]: I0228 05:29:30.461412 5014 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/2db9b9b7-c55d-4b8b-b51b-cd081afed742-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 28 05:29:30 crc kubenswrapper[5014]: I0228 05:29:30.461457 5014 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/2db9b9b7-c55d-4b8b-b51b-cd081afed742-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 28 05:29:30 crc kubenswrapper[5014]: I0228 05:29:30.461471 5014 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/2db9b9b7-c55d-4b8b-b51b-cd081afed742-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 28 05:29:30 crc kubenswrapper[5014]: I0228 05:29:30.461485 5014 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2db9b9b7-c55d-4b8b-b51b-cd081afed742-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 28 05:29:30 crc kubenswrapper[5014]: I0228 05:29:30.461500 5014 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2db9b9b7-c55d-4b8b-b51b-cd081afed742-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 28 05:29:30 crc kubenswrapper[5014]: I0228 05:29:30.461513 5014 reconciler_common.go:293] "Volume detached for 
volume \"kube-api-access-kwr7s\" (UniqueName: \"kubernetes.io/projected/2db9b9b7-c55d-4b8b-b51b-cd081afed742-kube-api-access-kwr7s\") on node \"crc\" DevicePath \"\"" Feb 28 05:29:30 crc kubenswrapper[5014]: I0228 05:29:30.461524 5014 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/2db9b9b7-c55d-4b8b-b51b-cd081afed742-config-data\") on node \"crc\" DevicePath \"\"" Feb 28 05:29:30 crc kubenswrapper[5014]: I0228 05:29:30.461564 5014 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Feb 28 05:29:30 crc kubenswrapper[5014]: I0228 05:29:30.461578 5014 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2db9b9b7-c55d-4b8b-b51b-cd081afed742-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 28 05:29:30 crc kubenswrapper[5014]: I0228 05:29:30.485569 5014 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Feb 28 05:29:30 crc kubenswrapper[5014]: I0228 05:29:30.562293 5014 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Feb 28 05:29:30 crc kubenswrapper[5014]: I0228 05:29:30.870105 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"2db9b9b7-c55d-4b8b-b51b-cd081afed742","Type":"ContainerDied","Data":"e1cd5bb249ebb17c0528846b96b7206a820d090637209bd8a5a61c076bbe489b"} Feb 28 05:29:30 crc kubenswrapper[5014]: I0228 05:29:30.870146 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1cd5bb249ebb17c0528846b96b7206a820d090637209bd8a5a61c076bbe489b" Feb 28 05:29:30 crc 
kubenswrapper[5014]: I0228 05:29:30.870195 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 28 05:29:34 crc kubenswrapper[5014]: I0228 05:29:34.973718 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 28 05:29:34 crc kubenswrapper[5014]: E0228 05:29:34.974378 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2db9b9b7-c55d-4b8b-b51b-cd081afed742" containerName="tempest-tests-tempest-tests-runner" Feb 28 05:29:34 crc kubenswrapper[5014]: I0228 05:29:34.974389 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="2db9b9b7-c55d-4b8b-b51b-cd081afed742" containerName="tempest-tests-tempest-tests-runner" Feb 28 05:29:34 crc kubenswrapper[5014]: E0228 05:29:34.974402 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13789da7-2c4d-4304-97da-be6aae8dadaa" containerName="oc" Feb 28 05:29:34 crc kubenswrapper[5014]: I0228 05:29:34.974408 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="13789da7-2c4d-4304-97da-be6aae8dadaa" containerName="oc" Feb 28 05:29:34 crc kubenswrapper[5014]: I0228 05:29:34.974566 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="13789da7-2c4d-4304-97da-be6aae8dadaa" containerName="oc" Feb 28 05:29:34 crc kubenswrapper[5014]: I0228 05:29:34.974577 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="2db9b9b7-c55d-4b8b-b51b-cd081afed742" containerName="tempest-tests-tempest-tests-runner" Feb 28 05:29:34 crc kubenswrapper[5014]: I0228 05:29:34.975710 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 28 05:29:34 crc kubenswrapper[5014]: I0228 05:29:34.975783 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 28 05:29:34 crc kubenswrapper[5014]: I0228 05:29:34.997158 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-l2zhb" Feb 28 05:29:35 crc kubenswrapper[5014]: I0228 05:29:35.155861 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"729e0ea7-49de-4e76-9921-8911ce80452e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 28 05:29:35 crc kubenswrapper[5014]: I0228 05:29:35.155995 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scrvw\" (UniqueName: \"kubernetes.io/projected/729e0ea7-49de-4e76-9921-8911ce80452e-kube-api-access-scrvw\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"729e0ea7-49de-4e76-9921-8911ce80452e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 28 05:29:35 crc kubenswrapper[5014]: I0228 05:29:35.257926 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"729e0ea7-49de-4e76-9921-8911ce80452e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 28 05:29:35 crc kubenswrapper[5014]: I0228 05:29:35.258070 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scrvw\" (UniqueName: \"kubernetes.io/projected/729e0ea7-49de-4e76-9921-8911ce80452e-kube-api-access-scrvw\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"729e0ea7-49de-4e76-9921-8911ce80452e\") " 
pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 28 05:29:35 crc kubenswrapper[5014]: I0228 05:29:35.258506 5014 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"729e0ea7-49de-4e76-9921-8911ce80452e\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 28 05:29:35 crc kubenswrapper[5014]: I0228 05:29:35.278728 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scrvw\" (UniqueName: \"kubernetes.io/projected/729e0ea7-49de-4e76-9921-8911ce80452e-kube-api-access-scrvw\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"729e0ea7-49de-4e76-9921-8911ce80452e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 28 05:29:35 crc kubenswrapper[5014]: I0228 05:29:35.296571 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"729e0ea7-49de-4e76-9921-8911ce80452e\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 28 05:29:35 crc kubenswrapper[5014]: I0228 05:29:35.320141 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 28 05:29:35 crc kubenswrapper[5014]: I0228 05:29:35.773530 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 28 05:29:35 crc kubenswrapper[5014]: I0228 05:29:35.775975 5014 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 05:29:35 crc kubenswrapper[5014]: I0228 05:29:35.919600 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"729e0ea7-49de-4e76-9921-8911ce80452e","Type":"ContainerStarted","Data":"ae88018b1d5c69d70fa0e81f4cd292c32816221f01b11847714febabb9302a5b"} Feb 28 05:29:36 crc kubenswrapper[5014]: I0228 05:29:36.931735 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"729e0ea7-49de-4e76-9921-8911ce80452e","Type":"ContainerStarted","Data":"3f08500b187804cfef38099c13de796c69b74c70384fb1fd2a8fac125d590fb2"} Feb 28 05:29:36 crc kubenswrapper[5014]: I0228 05:29:36.959569 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.085014567 podStartE2EDuration="2.959548563s" podCreationTimestamp="2026-02-28 05:29:34 +0000 UTC" firstStartedPulling="2026-02-28 05:29:35.775711345 +0000 UTC m=+3364.445837255" lastFinishedPulling="2026-02-28 05:29:36.650245321 +0000 UTC m=+3365.320371251" observedRunningTime="2026-02-28 05:29:36.950498978 +0000 UTC m=+3365.620624928" watchObservedRunningTime="2026-02-28 05:29:36.959548563 +0000 UTC m=+3365.629674483" Feb 28 05:29:38 crc kubenswrapper[5014]: I0228 05:29:38.172322 5014 scope.go:117] "RemoveContainer" containerID="192dfe00bc68c47111b7f3bc09d6d50a5ae0c8daea27c02e7d24d0f8e808dd55" Feb 28 05:29:38 crc kubenswrapper[5014]: E0228 
05:29:38.172858 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:29:51 crc kubenswrapper[5014]: I0228 05:29:51.172680 5014 scope.go:117] "RemoveContainer" containerID="192dfe00bc68c47111b7f3bc09d6d50a5ae0c8daea27c02e7d24d0f8e808dd55" Feb 28 05:29:51 crc kubenswrapper[5014]: E0228 05:29:51.173428 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:29:57 crc kubenswrapper[5014]: I0228 05:29:57.964375 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8292t/must-gather-2pvrl"] Feb 28 05:29:57 crc kubenswrapper[5014]: I0228 05:29:57.974681 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8292t/must-gather-2pvrl" Feb 28 05:29:57 crc kubenswrapper[5014]: I0228 05:29:57.978235 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-8292t"/"openshift-service-ca.crt" Feb 28 05:29:57 crc kubenswrapper[5014]: I0228 05:29:57.981469 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-8292t"/"kube-root-ca.crt" Feb 28 05:29:57 crc kubenswrapper[5014]: I0228 05:29:57.998200 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8292t/must-gather-2pvrl"] Feb 28 05:29:58 crc kubenswrapper[5014]: I0228 05:29:58.060989 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56tl6\" (UniqueName: \"kubernetes.io/projected/45f9a71c-0aad-4b18-97d8-fd99506da883-kube-api-access-56tl6\") pod \"must-gather-2pvrl\" (UID: \"45f9a71c-0aad-4b18-97d8-fd99506da883\") " pod="openshift-must-gather-8292t/must-gather-2pvrl" Feb 28 05:29:58 crc kubenswrapper[5014]: I0228 05:29:58.061105 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/45f9a71c-0aad-4b18-97d8-fd99506da883-must-gather-output\") pod \"must-gather-2pvrl\" (UID: \"45f9a71c-0aad-4b18-97d8-fd99506da883\") " pod="openshift-must-gather-8292t/must-gather-2pvrl" Feb 28 05:29:58 crc kubenswrapper[5014]: I0228 05:29:58.162579 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/45f9a71c-0aad-4b18-97d8-fd99506da883-must-gather-output\") pod \"must-gather-2pvrl\" (UID: \"45f9a71c-0aad-4b18-97d8-fd99506da883\") " pod="openshift-must-gather-8292t/must-gather-2pvrl" Feb 28 05:29:58 crc kubenswrapper[5014]: I0228 05:29:58.162720 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-56tl6\" (UniqueName: \"kubernetes.io/projected/45f9a71c-0aad-4b18-97d8-fd99506da883-kube-api-access-56tl6\") pod \"must-gather-2pvrl\" (UID: \"45f9a71c-0aad-4b18-97d8-fd99506da883\") " pod="openshift-must-gather-8292t/must-gather-2pvrl" Feb 28 05:29:58 crc kubenswrapper[5014]: I0228 05:29:58.163045 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/45f9a71c-0aad-4b18-97d8-fd99506da883-must-gather-output\") pod \"must-gather-2pvrl\" (UID: \"45f9a71c-0aad-4b18-97d8-fd99506da883\") " pod="openshift-must-gather-8292t/must-gather-2pvrl" Feb 28 05:29:58 crc kubenswrapper[5014]: I0228 05:29:58.182528 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56tl6\" (UniqueName: \"kubernetes.io/projected/45f9a71c-0aad-4b18-97d8-fd99506da883-kube-api-access-56tl6\") pod \"must-gather-2pvrl\" (UID: \"45f9a71c-0aad-4b18-97d8-fd99506da883\") " pod="openshift-must-gather-8292t/must-gather-2pvrl" Feb 28 05:29:58 crc kubenswrapper[5014]: I0228 05:29:58.298262 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8292t/must-gather-2pvrl" Feb 28 05:29:58 crc kubenswrapper[5014]: I0228 05:29:58.850428 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8292t/must-gather-2pvrl"] Feb 28 05:29:59 crc kubenswrapper[5014]: I0228 05:29:59.141910 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8292t/must-gather-2pvrl" event={"ID":"45f9a71c-0aad-4b18-97d8-fd99506da883","Type":"ContainerStarted","Data":"46df0e1c28fc44f78cfac5c1522eaf33774a13c6b265f47ffc989cbb4a579e91"} Feb 28 05:30:00 crc kubenswrapper[5014]: I0228 05:30:00.149118 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537610-wvrbh"] Feb 28 05:30:00 crc kubenswrapper[5014]: I0228 05:30:00.152320 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537610-wvrbh" Feb 28 05:30:00 crc kubenswrapper[5014]: I0228 05:30:00.154563 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 05:30:00 crc kubenswrapper[5014]: I0228 05:30:00.154697 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-r6pdk" Feb 28 05:30:00 crc kubenswrapper[5014]: I0228 05:30:00.154761 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 05:30:00 crc kubenswrapper[5014]: I0228 05:30:00.168583 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537610-svqjd"] Feb 28 05:30:00 crc kubenswrapper[5014]: I0228 05:30:00.169947 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537610-svqjd" Feb 28 05:30:00 crc kubenswrapper[5014]: I0228 05:30:00.172823 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 28 05:30:00 crc kubenswrapper[5014]: I0228 05:30:00.173002 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 28 05:30:00 crc kubenswrapper[5014]: I0228 05:30:00.194455 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537610-wvrbh"] Feb 28 05:30:00 crc kubenswrapper[5014]: I0228 05:30:00.214631 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537610-svqjd"] Feb 28 05:30:00 crc kubenswrapper[5014]: I0228 05:30:00.307714 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/ce70dcc4-fda6-4c98-b54e-c7b813f44c12-secret-volume\") pod \"collect-profiles-29537610-svqjd\" (UID: \"ce70dcc4-fda6-4c98-b54e-c7b813f44c12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537610-svqjd" Feb 28 05:30:00 crc kubenswrapper[5014]: I0228 05:30:00.307908 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce70dcc4-fda6-4c98-b54e-c7b813f44c12-config-volume\") pod \"collect-profiles-29537610-svqjd\" (UID: \"ce70dcc4-fda6-4c98-b54e-c7b813f44c12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537610-svqjd" Feb 28 05:30:00 crc kubenswrapper[5014]: I0228 05:30:00.308140 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlk8h\" (UniqueName: \"kubernetes.io/projected/ce70dcc4-fda6-4c98-b54e-c7b813f44c12-kube-api-access-mlk8h\") pod \"collect-profiles-29537610-svqjd\" (UID: \"ce70dcc4-fda6-4c98-b54e-c7b813f44c12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537610-svqjd" Feb 28 05:30:00 crc kubenswrapper[5014]: I0228 05:30:00.308428 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9v5h\" (UniqueName: \"kubernetes.io/projected/b81bdff3-95bd-45c0-912d-524b27981fd5-kube-api-access-g9v5h\") pod \"auto-csr-approver-29537610-wvrbh\" (UID: \"b81bdff3-95bd-45c0-912d-524b27981fd5\") " pod="openshift-infra/auto-csr-approver-29537610-wvrbh" Feb 28 05:30:00 crc kubenswrapper[5014]: I0228 05:30:00.410280 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9v5h\" (UniqueName: \"kubernetes.io/projected/b81bdff3-95bd-45c0-912d-524b27981fd5-kube-api-access-g9v5h\") pod \"auto-csr-approver-29537610-wvrbh\" (UID: \"b81bdff3-95bd-45c0-912d-524b27981fd5\") " pod="openshift-infra/auto-csr-approver-29537610-wvrbh" Feb 28 
05:30:00 crc kubenswrapper[5014]: I0228 05:30:00.410385 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce70dcc4-fda6-4c98-b54e-c7b813f44c12-secret-volume\") pod \"collect-profiles-29537610-svqjd\" (UID: \"ce70dcc4-fda6-4c98-b54e-c7b813f44c12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537610-svqjd" Feb 28 05:30:00 crc kubenswrapper[5014]: I0228 05:30:00.410443 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce70dcc4-fda6-4c98-b54e-c7b813f44c12-config-volume\") pod \"collect-profiles-29537610-svqjd\" (UID: \"ce70dcc4-fda6-4c98-b54e-c7b813f44c12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537610-svqjd" Feb 28 05:30:00 crc kubenswrapper[5014]: I0228 05:30:00.410594 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlk8h\" (UniqueName: \"kubernetes.io/projected/ce70dcc4-fda6-4c98-b54e-c7b813f44c12-kube-api-access-mlk8h\") pod \"collect-profiles-29537610-svqjd\" (UID: \"ce70dcc4-fda6-4c98-b54e-c7b813f44c12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537610-svqjd" Feb 28 05:30:00 crc kubenswrapper[5014]: I0228 05:30:00.411426 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce70dcc4-fda6-4c98-b54e-c7b813f44c12-config-volume\") pod \"collect-profiles-29537610-svqjd\" (UID: \"ce70dcc4-fda6-4c98-b54e-c7b813f44c12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537610-svqjd" Feb 28 05:30:00 crc kubenswrapper[5014]: I0228 05:30:00.416890 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce70dcc4-fda6-4c98-b54e-c7b813f44c12-secret-volume\") pod \"collect-profiles-29537610-svqjd\" (UID: 
\"ce70dcc4-fda6-4c98-b54e-c7b813f44c12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537610-svqjd" Feb 28 05:30:00 crc kubenswrapper[5014]: I0228 05:30:00.427122 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlk8h\" (UniqueName: \"kubernetes.io/projected/ce70dcc4-fda6-4c98-b54e-c7b813f44c12-kube-api-access-mlk8h\") pod \"collect-profiles-29537610-svqjd\" (UID: \"ce70dcc4-fda6-4c98-b54e-c7b813f44c12\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537610-svqjd" Feb 28 05:30:00 crc kubenswrapper[5014]: I0228 05:30:00.439533 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9v5h\" (UniqueName: \"kubernetes.io/projected/b81bdff3-95bd-45c0-912d-524b27981fd5-kube-api-access-g9v5h\") pod \"auto-csr-approver-29537610-wvrbh\" (UID: \"b81bdff3-95bd-45c0-912d-524b27981fd5\") " pod="openshift-infra/auto-csr-approver-29537610-wvrbh" Feb 28 05:30:00 crc kubenswrapper[5014]: I0228 05:30:00.471649 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537610-wvrbh" Feb 28 05:30:00 crc kubenswrapper[5014]: I0228 05:30:00.497522 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537610-svqjd" Feb 28 05:30:00 crc kubenswrapper[5014]: I0228 05:30:00.954471 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537610-wvrbh"] Feb 28 05:30:00 crc kubenswrapper[5014]: I0228 05:30:00.965450 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537610-svqjd"] Feb 28 05:30:03 crc kubenswrapper[5014]: I0228 05:30:03.171255 5014 scope.go:117] "RemoveContainer" containerID="192dfe00bc68c47111b7f3bc09d6d50a5ae0c8daea27c02e7d24d0f8e808dd55" Feb 28 05:30:03 crc kubenswrapper[5014]: E0228 05:30:03.171740 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:30:04 crc kubenswrapper[5014]: W0228 05:30:04.802279 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb81bdff3_95bd_45c0_912d_524b27981fd5.slice/crio-d600dc1bb04781b0761056ed644d4f79141e7520a86b8034e18a4f73a6ae5c79 WatchSource:0}: Error finding container d600dc1bb04781b0761056ed644d4f79141e7520a86b8034e18a4f73a6ae5c79: Status 404 returned error can't find the container with id d600dc1bb04781b0761056ed644d4f79141e7520a86b8034e18a4f73a6ae5c79 Feb 28 05:30:05 crc kubenswrapper[5014]: I0228 05:30:05.213304 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537610-svqjd" 
event={"ID":"ce70dcc4-fda6-4c98-b54e-c7b813f44c12","Type":"ContainerStarted","Data":"c282dce3066b10dad44448a79ba3c895ef4d635759783f085a4904a68c00c875"} Feb 28 05:30:05 crc kubenswrapper[5014]: I0228 05:30:05.213567 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537610-svqjd" event={"ID":"ce70dcc4-fda6-4c98-b54e-c7b813f44c12","Type":"ContainerStarted","Data":"afea9e681e3f4786a071f0845f7e441c94481b7e1502a6902d5f4ef5e4e9fa00"} Feb 28 05:30:05 crc kubenswrapper[5014]: I0228 05:30:05.215429 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537610-wvrbh" event={"ID":"b81bdff3-95bd-45c0-912d-524b27981fd5","Type":"ContainerStarted","Data":"d600dc1bb04781b0761056ed644d4f79141e7520a86b8034e18a4f73a6ae5c79"} Feb 28 05:30:05 crc kubenswrapper[5014]: I0228 05:30:05.218459 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8292t/must-gather-2pvrl" event={"ID":"45f9a71c-0aad-4b18-97d8-fd99506da883","Type":"ContainerStarted","Data":"3084376352eb2b0687c356ce3d8809c084dc9f9c290a662b0d6a13b269867986"} Feb 28 05:30:05 crc kubenswrapper[5014]: I0228 05:30:05.237076 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29537610-svqjd" podStartSLOduration=5.237056511 podStartE2EDuration="5.237056511s" podCreationTimestamp="2026-02-28 05:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 05:30:05.22819528 +0000 UTC m=+3393.898321190" watchObservedRunningTime="2026-02-28 05:30:05.237056511 +0000 UTC m=+3393.907182421" Feb 28 05:30:06 crc kubenswrapper[5014]: I0228 05:30:06.228123 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537610-wvrbh" 
event={"ID":"b81bdff3-95bd-45c0-912d-524b27981fd5","Type":"ContainerStarted","Data":"56692b4c1bc0176a424a9e82eb23a903f167eb23b29da5c8ba9d67719d8ae16d"} Feb 28 05:30:06 crc kubenswrapper[5014]: I0228 05:30:06.229953 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8292t/must-gather-2pvrl" event={"ID":"45f9a71c-0aad-4b18-97d8-fd99506da883","Type":"ContainerStarted","Data":"7889b55b1d2e3a305aff044340afb37a9ce4090e0a94bd24a1c5342b9c5fa54f"} Feb 28 05:30:06 crc kubenswrapper[5014]: I0228 05:30:06.231404 5014 generic.go:334] "Generic (PLEG): container finished" podID="ce70dcc4-fda6-4c98-b54e-c7b813f44c12" containerID="c282dce3066b10dad44448a79ba3c895ef4d635759783f085a4904a68c00c875" exitCode=0 Feb 28 05:30:06 crc kubenswrapper[5014]: I0228 05:30:06.231432 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537610-svqjd" event={"ID":"ce70dcc4-fda6-4c98-b54e-c7b813f44c12","Type":"ContainerDied","Data":"c282dce3066b10dad44448a79ba3c895ef4d635759783f085a4904a68c00c875"} Feb 28 05:30:06 crc kubenswrapper[5014]: I0228 05:30:06.266698 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537610-wvrbh" podStartSLOduration=5.269234734 podStartE2EDuration="6.266671988s" podCreationTimestamp="2026-02-28 05:30:00 +0000 UTC" firstStartedPulling="2026-02-28 05:30:04.806230467 +0000 UTC m=+3393.476356377" lastFinishedPulling="2026-02-28 05:30:05.803667721 +0000 UTC m=+3394.473793631" observedRunningTime="2026-02-28 05:30:06.248470444 +0000 UTC m=+3394.918596354" watchObservedRunningTime="2026-02-28 05:30:06.266671988 +0000 UTC m=+3394.936797918" Feb 28 05:30:06 crc kubenswrapper[5014]: I0228 05:30:06.278143 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8292t/must-gather-2pvrl" podStartSLOduration=3.2439345 podStartE2EDuration="9.27812401s" podCreationTimestamp="2026-02-28 
05:29:57 +0000 UTC" firstStartedPulling="2026-02-28 05:29:58.85494679 +0000 UTC m=+3387.525072700" lastFinishedPulling="2026-02-28 05:30:04.8891363 +0000 UTC m=+3393.559262210" observedRunningTime="2026-02-28 05:30:06.262972568 +0000 UTC m=+3394.933098478" watchObservedRunningTime="2026-02-28 05:30:06.27812401 +0000 UTC m=+3394.948249920" Feb 28 05:30:07 crc kubenswrapper[5014]: I0228 05:30:07.243500 5014 generic.go:334] "Generic (PLEG): container finished" podID="b81bdff3-95bd-45c0-912d-524b27981fd5" containerID="56692b4c1bc0176a424a9e82eb23a903f167eb23b29da5c8ba9d67719d8ae16d" exitCode=0 Feb 28 05:30:07 crc kubenswrapper[5014]: I0228 05:30:07.243560 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537610-wvrbh" event={"ID":"b81bdff3-95bd-45c0-912d-524b27981fd5","Type":"ContainerDied","Data":"56692b4c1bc0176a424a9e82eb23a903f167eb23b29da5c8ba9d67719d8ae16d"} Feb 28 05:30:07 crc kubenswrapper[5014]: I0228 05:30:07.585265 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537610-svqjd" Feb 28 05:30:07 crc kubenswrapper[5014]: I0228 05:30:07.660361 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce70dcc4-fda6-4c98-b54e-c7b813f44c12-config-volume\") pod \"ce70dcc4-fda6-4c98-b54e-c7b813f44c12\" (UID: \"ce70dcc4-fda6-4c98-b54e-c7b813f44c12\") " Feb 28 05:30:07 crc kubenswrapper[5014]: I0228 05:30:07.660830 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlk8h\" (UniqueName: \"kubernetes.io/projected/ce70dcc4-fda6-4c98-b54e-c7b813f44c12-kube-api-access-mlk8h\") pod \"ce70dcc4-fda6-4c98-b54e-c7b813f44c12\" (UID: \"ce70dcc4-fda6-4c98-b54e-c7b813f44c12\") " Feb 28 05:30:07 crc kubenswrapper[5014]: I0228 05:30:07.660930 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce70dcc4-fda6-4c98-b54e-c7b813f44c12-secret-volume\") pod \"ce70dcc4-fda6-4c98-b54e-c7b813f44c12\" (UID: \"ce70dcc4-fda6-4c98-b54e-c7b813f44c12\") " Feb 28 05:30:07 crc kubenswrapper[5014]: I0228 05:30:07.661491 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce70dcc4-fda6-4c98-b54e-c7b813f44c12-config-volume" (OuterVolumeSpecName: "config-volume") pod "ce70dcc4-fda6-4c98-b54e-c7b813f44c12" (UID: "ce70dcc4-fda6-4c98-b54e-c7b813f44c12"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 05:30:07 crc kubenswrapper[5014]: I0228 05:30:07.673775 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce70dcc4-fda6-4c98-b54e-c7b813f44c12-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ce70dcc4-fda6-4c98-b54e-c7b813f44c12" (UID: "ce70dcc4-fda6-4c98-b54e-c7b813f44c12"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:30:07 crc kubenswrapper[5014]: I0228 05:30:07.673892 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce70dcc4-fda6-4c98-b54e-c7b813f44c12-kube-api-access-mlk8h" (OuterVolumeSpecName: "kube-api-access-mlk8h") pod "ce70dcc4-fda6-4c98-b54e-c7b813f44c12" (UID: "ce70dcc4-fda6-4c98-b54e-c7b813f44c12"). InnerVolumeSpecName "kube-api-access-mlk8h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:30:07 crc kubenswrapper[5014]: I0228 05:30:07.765618 5014 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce70dcc4-fda6-4c98-b54e-c7b813f44c12-config-volume\") on node \"crc\" DevicePath \"\"" Feb 28 05:30:07 crc kubenswrapper[5014]: I0228 05:30:07.766111 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlk8h\" (UniqueName: \"kubernetes.io/projected/ce70dcc4-fda6-4c98-b54e-c7b813f44c12-kube-api-access-mlk8h\") on node \"crc\" DevicePath \"\"" Feb 28 05:30:07 crc kubenswrapper[5014]: I0228 05:30:07.766132 5014 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce70dcc4-fda6-4c98-b54e-c7b813f44c12-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 28 05:30:08 crc kubenswrapper[5014]: I0228 05:30:08.253118 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537610-svqjd" Feb 28 05:30:08 crc kubenswrapper[5014]: I0228 05:30:08.253114 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537610-svqjd" event={"ID":"ce70dcc4-fda6-4c98-b54e-c7b813f44c12","Type":"ContainerDied","Data":"afea9e681e3f4786a071f0845f7e441c94481b7e1502a6902d5f4ef5e4e9fa00"} Feb 28 05:30:08 crc kubenswrapper[5014]: I0228 05:30:08.253165 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afea9e681e3f4786a071f0845f7e441c94481b7e1502a6902d5f4ef5e4e9fa00" Feb 28 05:30:08 crc kubenswrapper[5014]: I0228 05:30:08.312000 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537565-8xslg"] Feb 28 05:30:08 crc kubenswrapper[5014]: I0228 05:30:08.323222 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537565-8xslg"] Feb 28 05:30:08 crc kubenswrapper[5014]: I0228 05:30:08.617871 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537610-wvrbh" Feb 28 05:30:08 crc kubenswrapper[5014]: I0228 05:30:08.687571 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9v5h\" (UniqueName: \"kubernetes.io/projected/b81bdff3-95bd-45c0-912d-524b27981fd5-kube-api-access-g9v5h\") pod \"b81bdff3-95bd-45c0-912d-524b27981fd5\" (UID: \"b81bdff3-95bd-45c0-912d-524b27981fd5\") " Feb 28 05:30:08 crc kubenswrapper[5014]: I0228 05:30:08.692377 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b81bdff3-95bd-45c0-912d-524b27981fd5-kube-api-access-g9v5h" (OuterVolumeSpecName: "kube-api-access-g9v5h") pod "b81bdff3-95bd-45c0-912d-524b27981fd5" (UID: "b81bdff3-95bd-45c0-912d-524b27981fd5"). 
InnerVolumeSpecName "kube-api-access-g9v5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:30:08 crc kubenswrapper[5014]: I0228 05:30:08.790309 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9v5h\" (UniqueName: \"kubernetes.io/projected/b81bdff3-95bd-45c0-912d-524b27981fd5-kube-api-access-g9v5h\") on node \"crc\" DevicePath \"\"" Feb 28 05:30:08 crc kubenswrapper[5014]: I0228 05:30:08.855407 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8292t/crc-debug-n8gpp"] Feb 28 05:30:08 crc kubenswrapper[5014]: E0228 05:30:08.855868 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce70dcc4-fda6-4c98-b54e-c7b813f44c12" containerName="collect-profiles" Feb 28 05:30:08 crc kubenswrapper[5014]: I0228 05:30:08.855890 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce70dcc4-fda6-4c98-b54e-c7b813f44c12" containerName="collect-profiles" Feb 28 05:30:08 crc kubenswrapper[5014]: E0228 05:30:08.855933 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b81bdff3-95bd-45c0-912d-524b27981fd5" containerName="oc" Feb 28 05:30:08 crc kubenswrapper[5014]: I0228 05:30:08.855942 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="b81bdff3-95bd-45c0-912d-524b27981fd5" containerName="oc" Feb 28 05:30:08 crc kubenswrapper[5014]: I0228 05:30:08.856175 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="b81bdff3-95bd-45c0-912d-524b27981fd5" containerName="oc" Feb 28 05:30:08 crc kubenswrapper[5014]: I0228 05:30:08.856196 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce70dcc4-fda6-4c98-b54e-c7b813f44c12" containerName="collect-profiles" Feb 28 05:30:08 crc kubenswrapper[5014]: I0228 05:30:08.857064 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8292t/crc-debug-n8gpp" Feb 28 05:30:08 crc kubenswrapper[5014]: I0228 05:30:08.858939 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-8292t"/"default-dockercfg-8hmf2" Feb 28 05:30:08 crc kubenswrapper[5014]: I0228 05:30:08.995532 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md25f\" (UniqueName: \"kubernetes.io/projected/f34c4226-dd3b-468f-87ce-9ba9a552f815-kube-api-access-md25f\") pod \"crc-debug-n8gpp\" (UID: \"f34c4226-dd3b-468f-87ce-9ba9a552f815\") " pod="openshift-must-gather-8292t/crc-debug-n8gpp" Feb 28 05:30:08 crc kubenswrapper[5014]: I0228 05:30:08.996110 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f34c4226-dd3b-468f-87ce-9ba9a552f815-host\") pod \"crc-debug-n8gpp\" (UID: \"f34c4226-dd3b-468f-87ce-9ba9a552f815\") " pod="openshift-must-gather-8292t/crc-debug-n8gpp" Feb 28 05:30:09 crc kubenswrapper[5014]: I0228 05:30:09.097817 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md25f\" (UniqueName: \"kubernetes.io/projected/f34c4226-dd3b-468f-87ce-9ba9a552f815-kube-api-access-md25f\") pod \"crc-debug-n8gpp\" (UID: \"f34c4226-dd3b-468f-87ce-9ba9a552f815\") " pod="openshift-must-gather-8292t/crc-debug-n8gpp" Feb 28 05:30:09 crc kubenswrapper[5014]: I0228 05:30:09.097877 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f34c4226-dd3b-468f-87ce-9ba9a552f815-host\") pod \"crc-debug-n8gpp\" (UID: \"f34c4226-dd3b-468f-87ce-9ba9a552f815\") " pod="openshift-must-gather-8292t/crc-debug-n8gpp" Feb 28 05:30:09 crc kubenswrapper[5014]: I0228 05:30:09.098020 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/f34c4226-dd3b-468f-87ce-9ba9a552f815-host\") pod \"crc-debug-n8gpp\" (UID: \"f34c4226-dd3b-468f-87ce-9ba9a552f815\") " pod="openshift-must-gather-8292t/crc-debug-n8gpp" Feb 28 05:30:09 crc kubenswrapper[5014]: I0228 05:30:09.121606 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md25f\" (UniqueName: \"kubernetes.io/projected/f34c4226-dd3b-468f-87ce-9ba9a552f815-kube-api-access-md25f\") pod \"crc-debug-n8gpp\" (UID: \"f34c4226-dd3b-468f-87ce-9ba9a552f815\") " pod="openshift-must-gather-8292t/crc-debug-n8gpp" Feb 28 05:30:09 crc kubenswrapper[5014]: I0228 05:30:09.173870 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8292t/crc-debug-n8gpp" Feb 28 05:30:09 crc kubenswrapper[5014]: W0228 05:30:09.206035 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf34c4226_dd3b_468f_87ce_9ba9a552f815.slice/crio-4b3319770bb506c353678256056de7ea7e8a5185a2b5b6f697e5a65f62dd82b6 WatchSource:0}: Error finding container 4b3319770bb506c353678256056de7ea7e8a5185a2b5b6f697e5a65f62dd82b6: Status 404 returned error can't find the container with id 4b3319770bb506c353678256056de7ea7e8a5185a2b5b6f697e5a65f62dd82b6 Feb 28 05:30:09 crc kubenswrapper[5014]: I0228 05:30:09.263743 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8292t/crc-debug-n8gpp" event={"ID":"f34c4226-dd3b-468f-87ce-9ba9a552f815","Type":"ContainerStarted","Data":"4b3319770bb506c353678256056de7ea7e8a5185a2b5b6f697e5a65f62dd82b6"} Feb 28 05:30:09 crc kubenswrapper[5014]: I0228 05:30:09.266487 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537610-wvrbh" event={"ID":"b81bdff3-95bd-45c0-912d-524b27981fd5","Type":"ContainerDied","Data":"d600dc1bb04781b0761056ed644d4f79141e7520a86b8034e18a4f73a6ae5c79"} Feb 28 05:30:09 crc kubenswrapper[5014]: 
I0228 05:30:09.266594 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d600dc1bb04781b0761056ed644d4f79141e7520a86b8034e18a4f73a6ae5c79" Feb 28 05:30:09 crc kubenswrapper[5014]: I0228 05:30:09.266715 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537610-wvrbh" Feb 28 05:30:09 crc kubenswrapper[5014]: I0228 05:30:09.334373 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537604-fw4zg"] Feb 28 05:30:09 crc kubenswrapper[5014]: I0228 05:30:09.345652 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537604-fw4zg"] Feb 28 05:30:10 crc kubenswrapper[5014]: I0228 05:30:10.188654 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54ef16f6-0d33-49bd-ac7d-b1c484fcf531" path="/var/lib/kubelet/pods/54ef16f6-0d33-49bd-ac7d-b1c484fcf531/volumes" Feb 28 05:30:10 crc kubenswrapper[5014]: I0228 05:30:10.189703 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="818c228c-0c91-4cb1-b010-40746252c8ee" path="/var/lib/kubelet/pods/818c228c-0c91-4cb1-b010-40746252c8ee/volumes" Feb 28 05:30:18 crc kubenswrapper[5014]: I0228 05:30:18.172073 5014 scope.go:117] "RemoveContainer" containerID="192dfe00bc68c47111b7f3bc09d6d50a5ae0c8daea27c02e7d24d0f8e808dd55" Feb 28 05:30:22 crc kubenswrapper[5014]: I0228 05:30:22.387761 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" event={"ID":"6aad0009-d904-48f8-8e30-82205907ece1","Type":"ContainerStarted","Data":"ab09a27e103d3311268f3f8870f394e7de849ef8d8bdc4ab745c03fb930a3cfb"} Feb 28 05:30:22 crc kubenswrapper[5014]: I0228 05:30:22.394995 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8292t/crc-debug-n8gpp" 
event={"ID":"f34c4226-dd3b-468f-87ce-9ba9a552f815","Type":"ContainerStarted","Data":"7c6d02d2200cbcb3eef548d6d0dde8cd3d896e212e368522b145cd2942bb7e7b"} Feb 28 05:30:22 crc kubenswrapper[5014]: I0228 05:30:22.431799 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8292t/crc-debug-n8gpp" podStartSLOduration=2.375698234 podStartE2EDuration="14.431781731s" podCreationTimestamp="2026-02-28 05:30:08 +0000 UTC" firstStartedPulling="2026-02-28 05:30:09.207551213 +0000 UTC m=+3397.877677123" lastFinishedPulling="2026-02-28 05:30:21.26363471 +0000 UTC m=+3409.933760620" observedRunningTime="2026-02-28 05:30:22.42583151 +0000 UTC m=+3411.095957420" watchObservedRunningTime="2026-02-28 05:30:22.431781731 +0000 UTC m=+3411.101907641" Feb 28 05:30:42 crc kubenswrapper[5014]: I0228 05:30:42.369455 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ttz7x"] Feb 28 05:30:42 crc kubenswrapper[5014]: I0228 05:30:42.373310 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ttz7x" Feb 28 05:30:42 crc kubenswrapper[5014]: I0228 05:30:42.413581 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ttz7x"] Feb 28 05:30:42 crc kubenswrapper[5014]: I0228 05:30:42.483692 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgljx\" (UniqueName: \"kubernetes.io/projected/14625278-d6fd-40ef-8090-65feb5e2306a-kube-api-access-bgljx\") pod \"community-operators-ttz7x\" (UID: \"14625278-d6fd-40ef-8090-65feb5e2306a\") " pod="openshift-marketplace/community-operators-ttz7x" Feb 28 05:30:42 crc kubenswrapper[5014]: I0228 05:30:42.483798 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14625278-d6fd-40ef-8090-65feb5e2306a-catalog-content\") pod \"community-operators-ttz7x\" (UID: \"14625278-d6fd-40ef-8090-65feb5e2306a\") " pod="openshift-marketplace/community-operators-ttz7x" Feb 28 05:30:42 crc kubenswrapper[5014]: I0228 05:30:42.483937 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14625278-d6fd-40ef-8090-65feb5e2306a-utilities\") pod \"community-operators-ttz7x\" (UID: \"14625278-d6fd-40ef-8090-65feb5e2306a\") " pod="openshift-marketplace/community-operators-ttz7x" Feb 28 05:30:42 crc kubenswrapper[5014]: I0228 05:30:42.585685 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14625278-d6fd-40ef-8090-65feb5e2306a-catalog-content\") pod \"community-operators-ttz7x\" (UID: \"14625278-d6fd-40ef-8090-65feb5e2306a\") " pod="openshift-marketplace/community-operators-ttz7x" Feb 28 05:30:42 crc kubenswrapper[5014]: I0228 05:30:42.585736 5014 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14625278-d6fd-40ef-8090-65feb5e2306a-utilities\") pod \"community-operators-ttz7x\" (UID: \"14625278-d6fd-40ef-8090-65feb5e2306a\") " pod="openshift-marketplace/community-operators-ttz7x" Feb 28 05:30:42 crc kubenswrapper[5014]: I0228 05:30:42.585878 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgljx\" (UniqueName: \"kubernetes.io/projected/14625278-d6fd-40ef-8090-65feb5e2306a-kube-api-access-bgljx\") pod \"community-operators-ttz7x\" (UID: \"14625278-d6fd-40ef-8090-65feb5e2306a\") " pod="openshift-marketplace/community-operators-ttz7x" Feb 28 05:30:42 crc kubenswrapper[5014]: I0228 05:30:42.586450 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14625278-d6fd-40ef-8090-65feb5e2306a-catalog-content\") pod \"community-operators-ttz7x\" (UID: \"14625278-d6fd-40ef-8090-65feb5e2306a\") " pod="openshift-marketplace/community-operators-ttz7x" Feb 28 05:30:42 crc kubenswrapper[5014]: I0228 05:30:42.586554 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14625278-d6fd-40ef-8090-65feb5e2306a-utilities\") pod \"community-operators-ttz7x\" (UID: \"14625278-d6fd-40ef-8090-65feb5e2306a\") " pod="openshift-marketplace/community-operators-ttz7x" Feb 28 05:30:42 crc kubenswrapper[5014]: I0228 05:30:42.607594 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgljx\" (UniqueName: \"kubernetes.io/projected/14625278-d6fd-40ef-8090-65feb5e2306a-kube-api-access-bgljx\") pod \"community-operators-ttz7x\" (UID: \"14625278-d6fd-40ef-8090-65feb5e2306a\") " pod="openshift-marketplace/community-operators-ttz7x" Feb 28 05:30:42 crc kubenswrapper[5014]: I0228 05:30:42.689343 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ttz7x" Feb 28 05:30:43 crc kubenswrapper[5014]: I0228 05:30:43.310414 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ttz7x"] Feb 28 05:30:43 crc kubenswrapper[5014]: I0228 05:30:43.583046 5014 generic.go:334] "Generic (PLEG): container finished" podID="14625278-d6fd-40ef-8090-65feb5e2306a" containerID="0c025eef977fc2c7b591b5147e40acdeb36aad1fd05371149184d9fcf036b86a" exitCode=0 Feb 28 05:30:43 crc kubenswrapper[5014]: I0228 05:30:43.583112 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttz7x" event={"ID":"14625278-d6fd-40ef-8090-65feb5e2306a","Type":"ContainerDied","Data":"0c025eef977fc2c7b591b5147e40acdeb36aad1fd05371149184d9fcf036b86a"} Feb 28 05:30:43 crc kubenswrapper[5014]: I0228 05:30:43.583141 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttz7x" event={"ID":"14625278-d6fd-40ef-8090-65feb5e2306a","Type":"ContainerStarted","Data":"4e4baa2147f5291d7e26b47cb440fe13350505dfe7649a817a4083f9bd42ad1a"} Feb 28 05:30:44 crc kubenswrapper[5014]: I0228 05:30:44.593935 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttz7x" event={"ID":"14625278-d6fd-40ef-8090-65feb5e2306a","Type":"ContainerStarted","Data":"15381d52c5dcc10383fe3d876c70a0f1aa1e19f65884e3c6c4e6dd52ff42f243"} Feb 28 05:30:45 crc kubenswrapper[5014]: I0228 05:30:45.605145 5014 generic.go:334] "Generic (PLEG): container finished" podID="14625278-d6fd-40ef-8090-65feb5e2306a" containerID="15381d52c5dcc10383fe3d876c70a0f1aa1e19f65884e3c6c4e6dd52ff42f243" exitCode=0 Feb 28 05:30:45 crc kubenswrapper[5014]: I0228 05:30:45.606673 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttz7x" 
event={"ID":"14625278-d6fd-40ef-8090-65feb5e2306a","Type":"ContainerDied","Data":"15381d52c5dcc10383fe3d876c70a0f1aa1e19f65884e3c6c4e6dd52ff42f243"} Feb 28 05:30:48 crc kubenswrapper[5014]: I0228 05:30:48.635297 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttz7x" event={"ID":"14625278-d6fd-40ef-8090-65feb5e2306a","Type":"ContainerStarted","Data":"4369ba3d30792087f5465daef977245740c35af8b7a8e7fb82a1131526b7ffef"} Feb 28 05:30:48 crc kubenswrapper[5014]: I0228 05:30:48.655418 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ttz7x" podStartSLOduration=2.851239154 podStartE2EDuration="6.655394759s" podCreationTimestamp="2026-02-28 05:30:42 +0000 UTC" firstStartedPulling="2026-02-28 05:30:43.585118202 +0000 UTC m=+3432.255244112" lastFinishedPulling="2026-02-28 05:30:47.389273807 +0000 UTC m=+3436.059399717" observedRunningTime="2026-02-28 05:30:48.649473328 +0000 UTC m=+3437.319599238" watchObservedRunningTime="2026-02-28 05:30:48.655394759 +0000 UTC m=+3437.325520699" Feb 28 05:30:52 crc kubenswrapper[5014]: I0228 05:30:52.690514 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ttz7x" Feb 28 05:30:52 crc kubenswrapper[5014]: I0228 05:30:52.691126 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ttz7x" Feb 28 05:30:52 crc kubenswrapper[5014]: I0228 05:30:52.743349 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ttz7x" Feb 28 05:30:53 crc kubenswrapper[5014]: I0228 05:30:53.733400 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ttz7x" Feb 28 05:30:53 crc kubenswrapper[5014]: I0228 05:30:53.787911 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-ttz7x"] Feb 28 05:30:55 crc kubenswrapper[5014]: I0228 05:30:55.692873 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ttz7x" podUID="14625278-d6fd-40ef-8090-65feb5e2306a" containerName="registry-server" containerID="cri-o://4369ba3d30792087f5465daef977245740c35af8b7a8e7fb82a1131526b7ffef" gracePeriod=2 Feb 28 05:30:56 crc kubenswrapper[5014]: I0228 05:30:56.274828 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ttz7x" Feb 28 05:30:56 crc kubenswrapper[5014]: I0228 05:30:56.368047 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14625278-d6fd-40ef-8090-65feb5e2306a-catalog-content\") pod \"14625278-d6fd-40ef-8090-65feb5e2306a\" (UID: \"14625278-d6fd-40ef-8090-65feb5e2306a\") " Feb 28 05:30:56 crc kubenswrapper[5014]: I0228 05:30:56.368090 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14625278-d6fd-40ef-8090-65feb5e2306a-utilities\") pod \"14625278-d6fd-40ef-8090-65feb5e2306a\" (UID: \"14625278-d6fd-40ef-8090-65feb5e2306a\") " Feb 28 05:30:56 crc kubenswrapper[5014]: I0228 05:30:56.368174 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgljx\" (UniqueName: \"kubernetes.io/projected/14625278-d6fd-40ef-8090-65feb5e2306a-kube-api-access-bgljx\") pod \"14625278-d6fd-40ef-8090-65feb5e2306a\" (UID: \"14625278-d6fd-40ef-8090-65feb5e2306a\") " Feb 28 05:30:56 crc kubenswrapper[5014]: I0228 05:30:56.368985 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14625278-d6fd-40ef-8090-65feb5e2306a-utilities" (OuterVolumeSpecName: "utilities") pod "14625278-d6fd-40ef-8090-65feb5e2306a" (UID: 
"14625278-d6fd-40ef-8090-65feb5e2306a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:30:56 crc kubenswrapper[5014]: I0228 05:30:56.392610 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14625278-d6fd-40ef-8090-65feb5e2306a-kube-api-access-bgljx" (OuterVolumeSpecName: "kube-api-access-bgljx") pod "14625278-d6fd-40ef-8090-65feb5e2306a" (UID: "14625278-d6fd-40ef-8090-65feb5e2306a"). InnerVolumeSpecName "kube-api-access-bgljx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:30:56 crc kubenswrapper[5014]: I0228 05:30:56.435183 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14625278-d6fd-40ef-8090-65feb5e2306a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14625278-d6fd-40ef-8090-65feb5e2306a" (UID: "14625278-d6fd-40ef-8090-65feb5e2306a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:30:56 crc kubenswrapper[5014]: I0228 05:30:56.470691 5014 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14625278-d6fd-40ef-8090-65feb5e2306a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 05:30:56 crc kubenswrapper[5014]: I0228 05:30:56.470723 5014 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14625278-d6fd-40ef-8090-65feb5e2306a-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 05:30:56 crc kubenswrapper[5014]: I0228 05:30:56.470735 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgljx\" (UniqueName: \"kubernetes.io/projected/14625278-d6fd-40ef-8090-65feb5e2306a-kube-api-access-bgljx\") on node \"crc\" DevicePath \"\"" Feb 28 05:30:56 crc kubenswrapper[5014]: I0228 05:30:56.708254 5014 generic.go:334] "Generic (PLEG): container finished" 
podID="14625278-d6fd-40ef-8090-65feb5e2306a" containerID="4369ba3d30792087f5465daef977245740c35af8b7a8e7fb82a1131526b7ffef" exitCode=0 Feb 28 05:30:56 crc kubenswrapper[5014]: I0228 05:30:56.708346 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttz7x" event={"ID":"14625278-d6fd-40ef-8090-65feb5e2306a","Type":"ContainerDied","Data":"4369ba3d30792087f5465daef977245740c35af8b7a8e7fb82a1131526b7ffef"} Feb 28 05:30:56 crc kubenswrapper[5014]: I0228 05:30:56.708631 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ttz7x" event={"ID":"14625278-d6fd-40ef-8090-65feb5e2306a","Type":"ContainerDied","Data":"4e4baa2147f5291d7e26b47cb440fe13350505dfe7649a817a4083f9bd42ad1a"} Feb 28 05:30:56 crc kubenswrapper[5014]: I0228 05:30:56.708403 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ttz7x" Feb 28 05:30:56 crc kubenswrapper[5014]: I0228 05:30:56.708656 5014 scope.go:117] "RemoveContainer" containerID="4369ba3d30792087f5465daef977245740c35af8b7a8e7fb82a1131526b7ffef" Feb 28 05:30:56 crc kubenswrapper[5014]: I0228 05:30:56.735795 5014 scope.go:117] "RemoveContainer" containerID="15381d52c5dcc10383fe3d876c70a0f1aa1e19f65884e3c6c4e6dd52ff42f243" Feb 28 05:30:56 crc kubenswrapper[5014]: I0228 05:30:56.759533 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ttz7x"] Feb 28 05:30:56 crc kubenswrapper[5014]: I0228 05:30:56.772891 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ttz7x"] Feb 28 05:30:56 crc kubenswrapper[5014]: I0228 05:30:56.781659 5014 scope.go:117] "RemoveContainer" containerID="0c025eef977fc2c7b591b5147e40acdeb36aad1fd05371149184d9fcf036b86a" Feb 28 05:30:56 crc kubenswrapper[5014]: I0228 05:30:56.832593 5014 scope.go:117] "RemoveContainer" 
containerID="4369ba3d30792087f5465daef977245740c35af8b7a8e7fb82a1131526b7ffef" Feb 28 05:30:56 crc kubenswrapper[5014]: E0228 05:30:56.833628 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4369ba3d30792087f5465daef977245740c35af8b7a8e7fb82a1131526b7ffef\": container with ID starting with 4369ba3d30792087f5465daef977245740c35af8b7a8e7fb82a1131526b7ffef not found: ID does not exist" containerID="4369ba3d30792087f5465daef977245740c35af8b7a8e7fb82a1131526b7ffef" Feb 28 05:30:56 crc kubenswrapper[5014]: I0228 05:30:56.833657 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4369ba3d30792087f5465daef977245740c35af8b7a8e7fb82a1131526b7ffef"} err="failed to get container status \"4369ba3d30792087f5465daef977245740c35af8b7a8e7fb82a1131526b7ffef\": rpc error: code = NotFound desc = could not find container \"4369ba3d30792087f5465daef977245740c35af8b7a8e7fb82a1131526b7ffef\": container with ID starting with 4369ba3d30792087f5465daef977245740c35af8b7a8e7fb82a1131526b7ffef not found: ID does not exist" Feb 28 05:30:56 crc kubenswrapper[5014]: I0228 05:30:56.833676 5014 scope.go:117] "RemoveContainer" containerID="15381d52c5dcc10383fe3d876c70a0f1aa1e19f65884e3c6c4e6dd52ff42f243" Feb 28 05:30:56 crc kubenswrapper[5014]: E0228 05:30:56.834516 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"15381d52c5dcc10383fe3d876c70a0f1aa1e19f65884e3c6c4e6dd52ff42f243\": container with ID starting with 15381d52c5dcc10383fe3d876c70a0f1aa1e19f65884e3c6c4e6dd52ff42f243 not found: ID does not exist" containerID="15381d52c5dcc10383fe3d876c70a0f1aa1e19f65884e3c6c4e6dd52ff42f243" Feb 28 05:30:56 crc kubenswrapper[5014]: I0228 05:30:56.834542 5014 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"15381d52c5dcc10383fe3d876c70a0f1aa1e19f65884e3c6c4e6dd52ff42f243"} err="failed to get container status \"15381d52c5dcc10383fe3d876c70a0f1aa1e19f65884e3c6c4e6dd52ff42f243\": rpc error: code = NotFound desc = could not find container \"15381d52c5dcc10383fe3d876c70a0f1aa1e19f65884e3c6c4e6dd52ff42f243\": container with ID starting with 15381d52c5dcc10383fe3d876c70a0f1aa1e19f65884e3c6c4e6dd52ff42f243 not found: ID does not exist" Feb 28 05:30:56 crc kubenswrapper[5014]: I0228 05:30:56.834555 5014 scope.go:117] "RemoveContainer" containerID="0c025eef977fc2c7b591b5147e40acdeb36aad1fd05371149184d9fcf036b86a" Feb 28 05:30:56 crc kubenswrapper[5014]: E0228 05:30:56.835054 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c025eef977fc2c7b591b5147e40acdeb36aad1fd05371149184d9fcf036b86a\": container with ID starting with 0c025eef977fc2c7b591b5147e40acdeb36aad1fd05371149184d9fcf036b86a not found: ID does not exist" containerID="0c025eef977fc2c7b591b5147e40acdeb36aad1fd05371149184d9fcf036b86a" Feb 28 05:30:56 crc kubenswrapper[5014]: I0228 05:30:56.835076 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c025eef977fc2c7b591b5147e40acdeb36aad1fd05371149184d9fcf036b86a"} err="failed to get container status \"0c025eef977fc2c7b591b5147e40acdeb36aad1fd05371149184d9fcf036b86a\": rpc error: code = NotFound desc = could not find container \"0c025eef977fc2c7b591b5147e40acdeb36aad1fd05371149184d9fcf036b86a\": container with ID starting with 0c025eef977fc2c7b591b5147e40acdeb36aad1fd05371149184d9fcf036b86a not found: ID does not exist" Feb 28 05:30:58 crc kubenswrapper[5014]: I0228 05:30:58.193598 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14625278-d6fd-40ef-8090-65feb5e2306a" path="/var/lib/kubelet/pods/14625278-d6fd-40ef-8090-65feb5e2306a/volumes" Feb 28 05:30:58 crc kubenswrapper[5014]: I0228 
05:30:58.737969 5014 generic.go:334] "Generic (PLEG): container finished" podID="f34c4226-dd3b-468f-87ce-9ba9a552f815" containerID="7c6d02d2200cbcb3eef548d6d0dde8cd3d896e212e368522b145cd2942bb7e7b" exitCode=0 Feb 28 05:30:58 crc kubenswrapper[5014]: I0228 05:30:58.738107 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8292t/crc-debug-n8gpp" event={"ID":"f34c4226-dd3b-468f-87ce-9ba9a552f815","Type":"ContainerDied","Data":"7c6d02d2200cbcb3eef548d6d0dde8cd3d896e212e368522b145cd2942bb7e7b"} Feb 28 05:30:59 crc kubenswrapper[5014]: I0228 05:30:59.122967 5014 scope.go:117] "RemoveContainer" containerID="c2e1b73fc9b75769ddaee936d2429553e86351d655cea60bd40cf131b88e14f6" Feb 28 05:30:59 crc kubenswrapper[5014]: I0228 05:30:59.170563 5014 scope.go:117] "RemoveContainer" containerID="4bc77d39841b4bc81790985dd4b80ea6c4b0cb61aa9d1de599360a28ca2e1ad2" Feb 28 05:30:59 crc kubenswrapper[5014]: I0228 05:30:59.870104 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8292t/crc-debug-n8gpp" Feb 28 05:30:59 crc kubenswrapper[5014]: I0228 05:30:59.910286 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8292t/crc-debug-n8gpp"] Feb 28 05:30:59 crc kubenswrapper[5014]: I0228 05:30:59.923543 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8292t/crc-debug-n8gpp"] Feb 28 05:31:00 crc kubenswrapper[5014]: I0228 05:31:00.053143 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-md25f\" (UniqueName: \"kubernetes.io/projected/f34c4226-dd3b-468f-87ce-9ba9a552f815-kube-api-access-md25f\") pod \"f34c4226-dd3b-468f-87ce-9ba9a552f815\" (UID: \"f34c4226-dd3b-468f-87ce-9ba9a552f815\") " Feb 28 05:31:00 crc kubenswrapper[5014]: I0228 05:31:00.053204 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f34c4226-dd3b-468f-87ce-9ba9a552f815-host\") pod \"f34c4226-dd3b-468f-87ce-9ba9a552f815\" (UID: \"f34c4226-dd3b-468f-87ce-9ba9a552f815\") " Feb 28 05:31:00 crc kubenswrapper[5014]: I0228 05:31:00.053412 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f34c4226-dd3b-468f-87ce-9ba9a552f815-host" (OuterVolumeSpecName: "host") pod "f34c4226-dd3b-468f-87ce-9ba9a552f815" (UID: "f34c4226-dd3b-468f-87ce-9ba9a552f815"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 05:31:00 crc kubenswrapper[5014]: I0228 05:31:00.053782 5014 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f34c4226-dd3b-468f-87ce-9ba9a552f815-host\") on node \"crc\" DevicePath \"\"" Feb 28 05:31:00 crc kubenswrapper[5014]: I0228 05:31:00.058309 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f34c4226-dd3b-468f-87ce-9ba9a552f815-kube-api-access-md25f" (OuterVolumeSpecName: "kube-api-access-md25f") pod "f34c4226-dd3b-468f-87ce-9ba9a552f815" (UID: "f34c4226-dd3b-468f-87ce-9ba9a552f815"). InnerVolumeSpecName "kube-api-access-md25f". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:31:00 crc kubenswrapper[5014]: I0228 05:31:00.155317 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-md25f\" (UniqueName: \"kubernetes.io/projected/f34c4226-dd3b-468f-87ce-9ba9a552f815-kube-api-access-md25f\") on node \"crc\" DevicePath \"\"" Feb 28 05:31:00 crc kubenswrapper[5014]: I0228 05:31:00.184185 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f34c4226-dd3b-468f-87ce-9ba9a552f815" path="/var/lib/kubelet/pods/f34c4226-dd3b-468f-87ce-9ba9a552f815/volumes" Feb 28 05:31:00 crc kubenswrapper[5014]: I0228 05:31:00.759684 5014 scope.go:117] "RemoveContainer" containerID="7c6d02d2200cbcb3eef548d6d0dde8cd3d896e212e368522b145cd2942bb7e7b" Feb 28 05:31:00 crc kubenswrapper[5014]: I0228 05:31:00.759715 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8292t/crc-debug-n8gpp" Feb 28 05:31:01 crc kubenswrapper[5014]: I0228 05:31:01.090841 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8292t/crc-debug-mhcq5"] Feb 28 05:31:01 crc kubenswrapper[5014]: E0228 05:31:01.091424 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f34c4226-dd3b-468f-87ce-9ba9a552f815" containerName="container-00" Feb 28 05:31:01 crc kubenswrapper[5014]: I0228 05:31:01.091436 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="f34c4226-dd3b-468f-87ce-9ba9a552f815" containerName="container-00" Feb 28 05:31:01 crc kubenswrapper[5014]: E0228 05:31:01.091451 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14625278-d6fd-40ef-8090-65feb5e2306a" containerName="registry-server" Feb 28 05:31:01 crc kubenswrapper[5014]: I0228 05:31:01.091457 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="14625278-d6fd-40ef-8090-65feb5e2306a" containerName="registry-server" Feb 28 05:31:01 crc kubenswrapper[5014]: E0228 05:31:01.091473 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14625278-d6fd-40ef-8090-65feb5e2306a" containerName="extract-utilities" Feb 28 05:31:01 crc kubenswrapper[5014]: I0228 05:31:01.091480 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="14625278-d6fd-40ef-8090-65feb5e2306a" containerName="extract-utilities" Feb 28 05:31:01 crc kubenswrapper[5014]: E0228 05:31:01.091487 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14625278-d6fd-40ef-8090-65feb5e2306a" containerName="extract-content" Feb 28 05:31:01 crc kubenswrapper[5014]: I0228 05:31:01.091492 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="14625278-d6fd-40ef-8090-65feb5e2306a" containerName="extract-content" Feb 28 05:31:01 crc kubenswrapper[5014]: I0228 05:31:01.091661 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="14625278-d6fd-40ef-8090-65feb5e2306a" 
containerName="registry-server" Feb 28 05:31:01 crc kubenswrapper[5014]: I0228 05:31:01.091675 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="f34c4226-dd3b-468f-87ce-9ba9a552f815" containerName="container-00" Feb 28 05:31:01 crc kubenswrapper[5014]: I0228 05:31:01.092313 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8292t/crc-debug-mhcq5" Feb 28 05:31:01 crc kubenswrapper[5014]: I0228 05:31:01.093887 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-8292t"/"default-dockercfg-8hmf2" Feb 28 05:31:01 crc kubenswrapper[5014]: I0228 05:31:01.274966 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8dc4a42d-6002-4b9c-8e54-6cecd95f60c6-host\") pod \"crc-debug-mhcq5\" (UID: \"8dc4a42d-6002-4b9c-8e54-6cecd95f60c6\") " pod="openshift-must-gather-8292t/crc-debug-mhcq5" Feb 28 05:31:01 crc kubenswrapper[5014]: I0228 05:31:01.275019 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-777gs\" (UniqueName: \"kubernetes.io/projected/8dc4a42d-6002-4b9c-8e54-6cecd95f60c6-kube-api-access-777gs\") pod \"crc-debug-mhcq5\" (UID: \"8dc4a42d-6002-4b9c-8e54-6cecd95f60c6\") " pod="openshift-must-gather-8292t/crc-debug-mhcq5" Feb 28 05:31:01 crc kubenswrapper[5014]: I0228 05:31:01.377227 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8dc4a42d-6002-4b9c-8e54-6cecd95f60c6-host\") pod \"crc-debug-mhcq5\" (UID: \"8dc4a42d-6002-4b9c-8e54-6cecd95f60c6\") " pod="openshift-must-gather-8292t/crc-debug-mhcq5" Feb 28 05:31:01 crc kubenswrapper[5014]: I0228 05:31:01.377276 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-777gs\" (UniqueName: 
\"kubernetes.io/projected/8dc4a42d-6002-4b9c-8e54-6cecd95f60c6-kube-api-access-777gs\") pod \"crc-debug-mhcq5\" (UID: \"8dc4a42d-6002-4b9c-8e54-6cecd95f60c6\") " pod="openshift-must-gather-8292t/crc-debug-mhcq5" Feb 28 05:31:01 crc kubenswrapper[5014]: I0228 05:31:01.377462 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8dc4a42d-6002-4b9c-8e54-6cecd95f60c6-host\") pod \"crc-debug-mhcq5\" (UID: \"8dc4a42d-6002-4b9c-8e54-6cecd95f60c6\") " pod="openshift-must-gather-8292t/crc-debug-mhcq5" Feb 28 05:31:01 crc kubenswrapper[5014]: I0228 05:31:01.398024 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-777gs\" (UniqueName: \"kubernetes.io/projected/8dc4a42d-6002-4b9c-8e54-6cecd95f60c6-kube-api-access-777gs\") pod \"crc-debug-mhcq5\" (UID: \"8dc4a42d-6002-4b9c-8e54-6cecd95f60c6\") " pod="openshift-must-gather-8292t/crc-debug-mhcq5" Feb 28 05:31:01 crc kubenswrapper[5014]: I0228 05:31:01.408107 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8292t/crc-debug-mhcq5" Feb 28 05:31:01 crc kubenswrapper[5014]: I0228 05:31:01.768662 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8292t/crc-debug-mhcq5" event={"ID":"8dc4a42d-6002-4b9c-8e54-6cecd95f60c6","Type":"ContainerStarted","Data":"f46542e9b1091b83d9791bcadb9b2896021c9e93183970497d0c509885e7cc05"} Feb 28 05:31:01 crc kubenswrapper[5014]: I0228 05:31:01.769064 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8292t/crc-debug-mhcq5" event={"ID":"8dc4a42d-6002-4b9c-8e54-6cecd95f60c6","Type":"ContainerStarted","Data":"8ddb67dc36fd3945078de3f984d1e523508c7fba6158e4ae6093667650673b7c"} Feb 28 05:31:01 crc kubenswrapper[5014]: I0228 05:31:01.788073 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8292t/crc-debug-mhcq5" podStartSLOduration=0.78805511 podStartE2EDuration="788.05511ms" podCreationTimestamp="2026-02-28 05:31:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 05:31:01.78441453 +0000 UTC m=+3450.454540440" watchObservedRunningTime="2026-02-28 05:31:01.78805511 +0000 UTC m=+3450.458181020" Feb 28 05:31:02 crc kubenswrapper[5014]: I0228 05:31:02.778075 5014 generic.go:334] "Generic (PLEG): container finished" podID="8dc4a42d-6002-4b9c-8e54-6cecd95f60c6" containerID="f46542e9b1091b83d9791bcadb9b2896021c9e93183970497d0c509885e7cc05" exitCode=0 Feb 28 05:31:02 crc kubenswrapper[5014]: I0228 05:31:02.778127 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8292t/crc-debug-mhcq5" event={"ID":"8dc4a42d-6002-4b9c-8e54-6cecd95f60c6","Type":"ContainerDied","Data":"f46542e9b1091b83d9791bcadb9b2896021c9e93183970497d0c509885e7cc05"} Feb 28 05:31:03 crc kubenswrapper[5014]: I0228 05:31:03.902094 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8292t/crc-debug-mhcq5" Feb 28 05:31:03 crc kubenswrapper[5014]: I0228 05:31:03.936212 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8292t/crc-debug-mhcq5"] Feb 28 05:31:03 crc kubenswrapper[5014]: I0228 05:31:03.944334 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8292t/crc-debug-mhcq5"] Feb 28 05:31:04 crc kubenswrapper[5014]: I0228 05:31:04.027095 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-777gs\" (UniqueName: \"kubernetes.io/projected/8dc4a42d-6002-4b9c-8e54-6cecd95f60c6-kube-api-access-777gs\") pod \"8dc4a42d-6002-4b9c-8e54-6cecd95f60c6\" (UID: \"8dc4a42d-6002-4b9c-8e54-6cecd95f60c6\") " Feb 28 05:31:04 crc kubenswrapper[5014]: I0228 05:31:04.027334 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8dc4a42d-6002-4b9c-8e54-6cecd95f60c6-host\") pod \"8dc4a42d-6002-4b9c-8e54-6cecd95f60c6\" (UID: \"8dc4a42d-6002-4b9c-8e54-6cecd95f60c6\") " Feb 28 05:31:04 crc kubenswrapper[5014]: I0228 05:31:04.027387 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8dc4a42d-6002-4b9c-8e54-6cecd95f60c6-host" (OuterVolumeSpecName: "host") pod "8dc4a42d-6002-4b9c-8e54-6cecd95f60c6" (UID: "8dc4a42d-6002-4b9c-8e54-6cecd95f60c6"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 05:31:04 crc kubenswrapper[5014]: I0228 05:31:04.027928 5014 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/8dc4a42d-6002-4b9c-8e54-6cecd95f60c6-host\") on node \"crc\" DevicePath \"\"" Feb 28 05:31:04 crc kubenswrapper[5014]: I0228 05:31:04.048844 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dc4a42d-6002-4b9c-8e54-6cecd95f60c6-kube-api-access-777gs" (OuterVolumeSpecName: "kube-api-access-777gs") pod "8dc4a42d-6002-4b9c-8e54-6cecd95f60c6" (UID: "8dc4a42d-6002-4b9c-8e54-6cecd95f60c6"). InnerVolumeSpecName "kube-api-access-777gs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:31:04 crc kubenswrapper[5014]: I0228 05:31:04.129851 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-777gs\" (UniqueName: \"kubernetes.io/projected/8dc4a42d-6002-4b9c-8e54-6cecd95f60c6-kube-api-access-777gs\") on node \"crc\" DevicePath \"\"" Feb 28 05:31:04 crc kubenswrapper[5014]: I0228 05:31:04.184564 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dc4a42d-6002-4b9c-8e54-6cecd95f60c6" path="/var/lib/kubelet/pods/8dc4a42d-6002-4b9c-8e54-6cecd95f60c6/volumes" Feb 28 05:31:04 crc kubenswrapper[5014]: I0228 05:31:04.818394 5014 scope.go:117] "RemoveContainer" containerID="f46542e9b1091b83d9791bcadb9b2896021c9e93183970497d0c509885e7cc05" Feb 28 05:31:04 crc kubenswrapper[5014]: I0228 05:31:04.818443 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8292t/crc-debug-mhcq5" Feb 28 05:31:05 crc kubenswrapper[5014]: I0228 05:31:05.111200 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8292t/crc-debug-fqzvn"] Feb 28 05:31:05 crc kubenswrapper[5014]: E0228 05:31:05.111823 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8dc4a42d-6002-4b9c-8e54-6cecd95f60c6" containerName="container-00" Feb 28 05:31:05 crc kubenswrapper[5014]: I0228 05:31:05.111841 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="8dc4a42d-6002-4b9c-8e54-6cecd95f60c6" containerName="container-00" Feb 28 05:31:05 crc kubenswrapper[5014]: I0228 05:31:05.112067 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="8dc4a42d-6002-4b9c-8e54-6cecd95f60c6" containerName="container-00" Feb 28 05:31:05 crc kubenswrapper[5014]: I0228 05:31:05.112778 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8292t/crc-debug-fqzvn" Feb 28 05:31:05 crc kubenswrapper[5014]: I0228 05:31:05.114737 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-8292t"/"default-dockercfg-8hmf2" Feb 28 05:31:05 crc kubenswrapper[5014]: I0228 05:31:05.250631 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/582e47a4-0e8b-4491-91f8-5f9b33b3023b-host\") pod \"crc-debug-fqzvn\" (UID: \"582e47a4-0e8b-4491-91f8-5f9b33b3023b\") " pod="openshift-must-gather-8292t/crc-debug-fqzvn" Feb 28 05:31:05 crc kubenswrapper[5014]: I0228 05:31:05.250727 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5w25\" (UniqueName: \"kubernetes.io/projected/582e47a4-0e8b-4491-91f8-5f9b33b3023b-kube-api-access-q5w25\") pod \"crc-debug-fqzvn\" (UID: \"582e47a4-0e8b-4491-91f8-5f9b33b3023b\") " 
pod="openshift-must-gather-8292t/crc-debug-fqzvn" Feb 28 05:31:05 crc kubenswrapper[5014]: I0228 05:31:05.351978 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/582e47a4-0e8b-4491-91f8-5f9b33b3023b-host\") pod \"crc-debug-fqzvn\" (UID: \"582e47a4-0e8b-4491-91f8-5f9b33b3023b\") " pod="openshift-must-gather-8292t/crc-debug-fqzvn" Feb 28 05:31:05 crc kubenswrapper[5014]: I0228 05:31:05.352307 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5w25\" (UniqueName: \"kubernetes.io/projected/582e47a4-0e8b-4491-91f8-5f9b33b3023b-kube-api-access-q5w25\") pod \"crc-debug-fqzvn\" (UID: \"582e47a4-0e8b-4491-91f8-5f9b33b3023b\") " pod="openshift-must-gather-8292t/crc-debug-fqzvn" Feb 28 05:31:05 crc kubenswrapper[5014]: I0228 05:31:05.352157 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/582e47a4-0e8b-4491-91f8-5f9b33b3023b-host\") pod \"crc-debug-fqzvn\" (UID: \"582e47a4-0e8b-4491-91f8-5f9b33b3023b\") " pod="openshift-must-gather-8292t/crc-debug-fqzvn" Feb 28 05:31:05 crc kubenswrapper[5014]: I0228 05:31:05.370761 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5w25\" (UniqueName: \"kubernetes.io/projected/582e47a4-0e8b-4491-91f8-5f9b33b3023b-kube-api-access-q5w25\") pod \"crc-debug-fqzvn\" (UID: \"582e47a4-0e8b-4491-91f8-5f9b33b3023b\") " pod="openshift-must-gather-8292t/crc-debug-fqzvn" Feb 28 05:31:05 crc kubenswrapper[5014]: I0228 05:31:05.429730 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8292t/crc-debug-fqzvn" Feb 28 05:31:05 crc kubenswrapper[5014]: W0228 05:31:05.458524 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod582e47a4_0e8b_4491_91f8_5f9b33b3023b.slice/crio-f951d90be6feaafe5fea0daf88893e69f12eda6c3c4b77dfc5b31f063730bdab WatchSource:0}: Error finding container f951d90be6feaafe5fea0daf88893e69f12eda6c3c4b77dfc5b31f063730bdab: Status 404 returned error can't find the container with id f951d90be6feaafe5fea0daf88893e69f12eda6c3c4b77dfc5b31f063730bdab Feb 28 05:31:05 crc kubenswrapper[5014]: I0228 05:31:05.830219 5014 generic.go:334] "Generic (PLEG): container finished" podID="582e47a4-0e8b-4491-91f8-5f9b33b3023b" containerID="1b0198979cb08e3e72141e787d5b16517716f650db739940bc326c96395fc3a5" exitCode=0 Feb 28 05:31:05 crc kubenswrapper[5014]: I0228 05:31:05.830249 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8292t/crc-debug-fqzvn" event={"ID":"582e47a4-0e8b-4491-91f8-5f9b33b3023b","Type":"ContainerDied","Data":"1b0198979cb08e3e72141e787d5b16517716f650db739940bc326c96395fc3a5"} Feb 28 05:31:05 crc kubenswrapper[5014]: I0228 05:31:05.830611 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8292t/crc-debug-fqzvn" event={"ID":"582e47a4-0e8b-4491-91f8-5f9b33b3023b","Type":"ContainerStarted","Data":"f951d90be6feaafe5fea0daf88893e69f12eda6c3c4b77dfc5b31f063730bdab"} Feb 28 05:31:05 crc kubenswrapper[5014]: I0228 05:31:05.876057 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8292t/crc-debug-fqzvn"] Feb 28 05:31:05 crc kubenswrapper[5014]: I0228 05:31:05.891776 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8292t/crc-debug-fqzvn"] Feb 28 05:31:06 crc kubenswrapper[5014]: I0228 05:31:06.982913 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8292t/crc-debug-fqzvn" Feb 28 05:31:07 crc kubenswrapper[5014]: I0228 05:31:07.083706 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/582e47a4-0e8b-4491-91f8-5f9b33b3023b-host\") pod \"582e47a4-0e8b-4491-91f8-5f9b33b3023b\" (UID: \"582e47a4-0e8b-4491-91f8-5f9b33b3023b\") " Feb 28 05:31:07 crc kubenswrapper[5014]: I0228 05:31:07.083856 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5w25\" (UniqueName: \"kubernetes.io/projected/582e47a4-0e8b-4491-91f8-5f9b33b3023b-kube-api-access-q5w25\") pod \"582e47a4-0e8b-4491-91f8-5f9b33b3023b\" (UID: \"582e47a4-0e8b-4491-91f8-5f9b33b3023b\") " Feb 28 05:31:07 crc kubenswrapper[5014]: I0228 05:31:07.083856 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/582e47a4-0e8b-4491-91f8-5f9b33b3023b-host" (OuterVolumeSpecName: "host") pod "582e47a4-0e8b-4491-91f8-5f9b33b3023b" (UID: "582e47a4-0e8b-4491-91f8-5f9b33b3023b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 05:31:07 crc kubenswrapper[5014]: I0228 05:31:07.084245 5014 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/582e47a4-0e8b-4491-91f8-5f9b33b3023b-host\") on node \"crc\" DevicePath \"\"" Feb 28 05:31:07 crc kubenswrapper[5014]: I0228 05:31:07.089667 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/582e47a4-0e8b-4491-91f8-5f9b33b3023b-kube-api-access-q5w25" (OuterVolumeSpecName: "kube-api-access-q5w25") pod "582e47a4-0e8b-4491-91f8-5f9b33b3023b" (UID: "582e47a4-0e8b-4491-91f8-5f9b33b3023b"). InnerVolumeSpecName "kube-api-access-q5w25". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:31:07 crc kubenswrapper[5014]: I0228 05:31:07.186180 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5w25\" (UniqueName: \"kubernetes.io/projected/582e47a4-0e8b-4491-91f8-5f9b33b3023b-kube-api-access-q5w25\") on node \"crc\" DevicePath \"\"" Feb 28 05:31:07 crc kubenswrapper[5014]: I0228 05:31:07.852497 5014 scope.go:117] "RemoveContainer" containerID="1b0198979cb08e3e72141e787d5b16517716f650db739940bc326c96395fc3a5" Feb 28 05:31:07 crc kubenswrapper[5014]: I0228 05:31:07.852547 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8292t/crc-debug-fqzvn" Feb 28 05:31:08 crc kubenswrapper[5014]: I0228 05:31:08.183861 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="582e47a4-0e8b-4491-91f8-5f9b33b3023b" path="/var/lib/kubelet/pods/582e47a4-0e8b-4491-91f8-5f9b33b3023b/volumes" Feb 28 05:31:21 crc kubenswrapper[5014]: I0228 05:31:21.921343 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-85ff55b8dd-q46np_0c857c36-d78c-484b-a0b1-1cabf11c32a3/barbican-api/0.log" Feb 28 05:31:22 crc kubenswrapper[5014]: I0228 05:31:22.105532 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-85ff55b8dd-q46np_0c857c36-d78c-484b-a0b1-1cabf11c32a3/barbican-api-log/0.log" Feb 28 05:31:22 crc kubenswrapper[5014]: I0228 05:31:22.210133 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7dd8f4645d-ckwth_bd8db062-b379-402e-a83b-291ee7e55bf1/barbican-keystone-listener/0.log" Feb 28 05:31:22 crc kubenswrapper[5014]: I0228 05:31:22.213012 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7dd8f4645d-ckwth_bd8db062-b379-402e-a83b-291ee7e55bf1/barbican-keystone-listener-log/0.log" Feb 28 05:31:22 crc kubenswrapper[5014]: I0228 05:31:22.292975 5014 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-76c688b599-br8wc_45715325-beb1-4639-bb3c-d466fc6e85ce/barbican-worker/0.log" Feb 28 05:31:22 crc kubenswrapper[5014]: I0228 05:31:22.393993 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-76c688b599-br8wc_45715325-beb1-4639-bb3c-d466fc6e85ce/barbican-worker-log/0.log" Feb 28 05:31:22 crc kubenswrapper[5014]: I0228 05:31:22.479609 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv_71fc0e19-253e-4cae-b6ee-7efc24398ffa/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 05:31:22 crc kubenswrapper[5014]: I0228 05:31:22.590999 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_522b8e6d-5531-4436-9c64-fadde40a77df/ceilometer-central-agent/0.log" Feb 28 05:31:22 crc kubenswrapper[5014]: I0228 05:31:22.711724 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_522b8e6d-5531-4436-9c64-fadde40a77df/ceilometer-notification-agent/0.log" Feb 28 05:31:22 crc kubenswrapper[5014]: I0228 05:31:22.765186 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_522b8e6d-5531-4436-9c64-fadde40a77df/proxy-httpd/0.log" Feb 28 05:31:22 crc kubenswrapper[5014]: I0228 05:31:22.796496 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_522b8e6d-5531-4436-9c64-fadde40a77df/sg-core/0.log" Feb 28 05:31:22 crc kubenswrapper[5014]: I0228 05:31:22.914284 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_89500e11-205d-40a6-ba7b-54b76ec65b69/cinder-api-log/0.log" Feb 28 05:31:22 crc kubenswrapper[5014]: I0228 05:31:22.966650 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_89500e11-205d-40a6-ba7b-54b76ec65b69/cinder-api/0.log" Feb 28 05:31:23 crc kubenswrapper[5014]: I0228 
05:31:23.067035 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_29a28811-7002-4b5e-a6d7-8c204bc306db/cinder-scheduler/0.log" Feb 28 05:31:23 crc kubenswrapper[5014]: I0228 05:31:23.215788 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_29a28811-7002-4b5e-a6d7-8c204bc306db/probe/0.log" Feb 28 05:31:23 crc kubenswrapper[5014]: I0228 05:31:23.308791 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-ldjp9_169069f4-d382-4045-99a5-cf54af88ee18/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 05:31:23 crc kubenswrapper[5014]: I0228 05:31:23.424358 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-7cc87_2cf2a283-e04c-4b99-978c-8e8261227a09/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 05:31:23 crc kubenswrapper[5014]: I0228 05:31:23.484578 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-zxf77_67c40633-8133-430b-8528-2aab67995b17/init/0.log" Feb 28 05:31:23 crc kubenswrapper[5014]: I0228 05:31:23.703851 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-zxf77_67c40633-8133-430b-8528-2aab67995b17/init/0.log" Feb 28 05:31:23 crc kubenswrapper[5014]: I0228 05:31:23.734974 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-zxf77_67c40633-8133-430b-8528-2aab67995b17/dnsmasq-dns/0.log" Feb 28 05:31:23 crc kubenswrapper[5014]: I0228 05:31:23.799470 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-fq52b_92c43e33-7947-4ad2-984a-e2618b76f368/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 05:31:23 crc kubenswrapper[5014]: I0228 05:31:23.910995 5014 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_glance-default-external-api-0_f2c655a1-25af-4c06-9799-01a3a9fd5e52/glance-log/0.log" Feb 28 05:31:23 crc kubenswrapper[5014]: I0228 05:31:23.971919 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f2c655a1-25af-4c06-9799-01a3a9fd5e52/glance-httpd/0.log" Feb 28 05:31:24 crc kubenswrapper[5014]: I0228 05:31:24.152311 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b75610f5-509e-4ffa-a5fe-0eaa0dbcce98/glance-log/0.log" Feb 28 05:31:24 crc kubenswrapper[5014]: I0228 05:31:24.191257 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b75610f5-509e-4ffa-a5fe-0eaa0dbcce98/glance-httpd/0.log" Feb 28 05:31:24 crc kubenswrapper[5014]: I0228 05:31:24.439838 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b_fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 05:31:24 crc kubenswrapper[5014]: I0228 05:31:24.440817 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-c9c88866d-6m8lj_6ee56420-1b4d-4898-97db-d05756b9bb72/horizon/0.log" Feb 28 05:31:24 crc kubenswrapper[5014]: I0228 05:31:24.614616 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-c9c88866d-6m8lj_6ee56420-1b4d-4898-97db-d05756b9bb72/horizon-log/0.log" Feb 28 05:31:24 crc kubenswrapper[5014]: I0228 05:31:24.710720 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-qvmfl_2ff06abc-551c-452e-8593-603fb882db21/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 05:31:24 crc kubenswrapper[5014]: I0228 05:31:24.942336 5014 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-799995d5cd-97xmn_2371f935-6c31-4088-ad79-e3dadd298f40/keystone-api/0.log" Feb 28 05:31:24 crc kubenswrapper[5014]: I0228 05:31:24.958305 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29537581-pvgzn_b2a11b02-95d9-48f6-bb32-afa554e2ec2e/keystone-cron/0.log" Feb 28 05:31:25 crc kubenswrapper[5014]: I0228 05:31:25.118627 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_020d4ca7-8d28-4954-a4a0-c031eb935a21/kube-state-metrics/0.log" Feb 28 05:31:25 crc kubenswrapper[5014]: I0228 05:31:25.165669 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6_85e8a1f1-6f8c-4af8-9273-dc37192bea6a/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 05:31:25 crc kubenswrapper[5014]: I0228 05:31:25.590200 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-58dcfcf9bc-4rtlk_1de3f60c-6e45-4b05-84eb-749e470d4595/neutron-httpd/0.log" Feb 28 05:31:25 crc kubenswrapper[5014]: I0228 05:31:25.605738 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-58dcfcf9bc-4rtlk_1de3f60c-6e45-4b05-84eb-749e470d4595/neutron-api/0.log" Feb 28 05:31:25 crc kubenswrapper[5014]: I0228 05:31:25.852777 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq_8746177b-a5ee-41d6-8d6c-94e7eae1082e/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 05:31:26 crc kubenswrapper[5014]: I0228 05:31:26.286579 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_e76b3d9a-ffbe-4d58-9264-1b4ca1528410/nova-cell0-conductor-conductor/0.log" Feb 28 05:31:26 crc kubenswrapper[5014]: I0228 05:31:26.420301 5014 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-api-0_d4e1baa8-fe04-453a-8462-e7de1e98ba73/nova-api-log/0.log" Feb 28 05:31:26 crc kubenswrapper[5014]: I0228 05:31:26.470921 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_d4e1baa8-fe04-453a-8462-e7de1e98ba73/nova-api-api/0.log" Feb 28 05:31:26 crc kubenswrapper[5014]: I0228 05:31:26.602145 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_01377f7d-9edd-424c-b22e-42fde4e51e95/nova-cell1-conductor-conductor/0.log" Feb 28 05:31:26 crc kubenswrapper[5014]: I0228 05:31:26.677125 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_974c3323-4513-41b7-9c2e-7cb58d91d6f1/nova-cell1-novncproxy-novncproxy/0.log" Feb 28 05:31:26 crc kubenswrapper[5014]: I0228 05:31:26.811070 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-62n2s_b2cec974-8eb2-428d-8c59-97af37993f91/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 05:31:26 crc kubenswrapper[5014]: I0228 05:31:26.962318 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d354f3a0-5e09-438a-bb5d-385b2ab4857f/nova-metadata-log/0.log" Feb 28 05:31:27 crc kubenswrapper[5014]: I0228 05:31:27.281752 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_7b66aa07-e591-474f-b1f0-442147425299/nova-scheduler-scheduler/0.log" Feb 28 05:31:27 crc kubenswrapper[5014]: I0228 05:31:27.335023 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ac71caa8-2f63-4b64-8d37-a1b364b62158/mysql-bootstrap/0.log" Feb 28 05:31:27 crc kubenswrapper[5014]: I0228 05:31:27.488036 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ac71caa8-2f63-4b64-8d37-a1b364b62158/galera/0.log" Feb 28 05:31:27 crc kubenswrapper[5014]: I0228 05:31:27.546976 5014 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ac71caa8-2f63-4b64-8d37-a1b364b62158/mysql-bootstrap/0.log" Feb 28 05:31:27 crc kubenswrapper[5014]: I0228 05:31:27.669842 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c1c70607-6183-4835-9ce6-fe3ef0d2b6fb/mysql-bootstrap/0.log" Feb 28 05:31:27 crc kubenswrapper[5014]: I0228 05:31:27.853506 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c1c70607-6183-4835-9ce6-fe3ef0d2b6fb/mysql-bootstrap/0.log" Feb 28 05:31:27 crc kubenswrapper[5014]: I0228 05:31:27.915876 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c1c70607-6183-4835-9ce6-fe3ef0d2b6fb/galera/0.log" Feb 28 05:31:28 crc kubenswrapper[5014]: I0228 05:31:28.035771 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d354f3a0-5e09-438a-bb5d-385b2ab4857f/nova-metadata-metadata/0.log" Feb 28 05:31:28 crc kubenswrapper[5014]: I0228 05:31:28.047445 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_dae41ad3-a997-4a4a-91ab-34175d98fb97/openstackclient/0.log" Feb 28 05:31:28 crc kubenswrapper[5014]: I0228 05:31:28.119509 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-9qps6_02ab5d98-13ab-483d-b32b-a509bedd8ded/ovn-controller/0.log" Feb 28 05:31:28 crc kubenswrapper[5014]: I0228 05:31:28.231183 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-mgzdl_43eb6c14-8ca4-41ba-9ee2-7326edcab237/openstack-network-exporter/0.log" Feb 28 05:31:28 crc kubenswrapper[5014]: I0228 05:31:28.366064 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6vfgk_c3f16040-f11b-405c-b332-7ee5eabac2bd/ovsdb-server-init/0.log" Feb 28 05:31:28 crc kubenswrapper[5014]: I0228 05:31:28.585594 5014 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-6vfgk_c3f16040-f11b-405c-b332-7ee5eabac2bd/ovsdb-server/0.log" Feb 28 05:31:28 crc kubenswrapper[5014]: I0228 05:31:28.601644 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6vfgk_c3f16040-f11b-405c-b332-7ee5eabac2bd/ovsdb-server-init/0.log" Feb 28 05:31:28 crc kubenswrapper[5014]: I0228 05:31:28.666616 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6vfgk_c3f16040-f11b-405c-b332-7ee5eabac2bd/ovs-vswitchd/0.log" Feb 28 05:31:28 crc kubenswrapper[5014]: I0228 05:31:28.952849 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-gdrsz_ab8babaf-acb3-4c27-a8bd-abc56808e9d7/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 05:31:29 crc kubenswrapper[5014]: I0228 05:31:29.022085 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_22702874-a9ba-4491-aed2-5ef93384150c/ovn-northd/0.log" Feb 28 05:31:29 crc kubenswrapper[5014]: I0228 05:31:29.072936 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_22702874-a9ba-4491-aed2-5ef93384150c/openstack-network-exporter/0.log" Feb 28 05:31:29 crc kubenswrapper[5014]: I0228 05:31:29.157704 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_5a44d0e3-2ba4-4d6f-924b-1f516c90a11f/openstack-network-exporter/0.log" Feb 28 05:31:29 crc kubenswrapper[5014]: I0228 05:31:29.257066 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_5a44d0e3-2ba4-4d6f-924b-1f516c90a11f/ovsdbserver-nb/0.log" Feb 28 05:31:29 crc kubenswrapper[5014]: I0228 05:31:29.344354 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_569b1ad4-179c-4852-a5fc-509fe31df812/openstack-network-exporter/0.log" Feb 28 05:31:29 crc kubenswrapper[5014]: I0228 05:31:29.394176 5014 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_569b1ad4-179c-4852-a5fc-509fe31df812/ovsdbserver-sb/0.log" Feb 28 05:31:29 crc kubenswrapper[5014]: I0228 05:31:29.671059 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5cd4874894-s6tz4_c690f68f-407a-4db7-a99c-67cfa5a5833b/placement-api/0.log" Feb 28 05:31:29 crc kubenswrapper[5014]: I0228 05:31:29.672531 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5cd4874894-s6tz4_c690f68f-407a-4db7-a99c-67cfa5a5833b/placement-log/0.log" Feb 28 05:31:29 crc kubenswrapper[5014]: I0228 05:31:29.706213 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3df93ff6-00cf-4c7f-8971-6d1d78795456/setup-container/0.log" Feb 28 05:31:29 crc kubenswrapper[5014]: I0228 05:31:29.876882 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3df93ff6-00cf-4c7f-8971-6d1d78795456/setup-container/0.log" Feb 28 05:31:29 crc kubenswrapper[5014]: I0228 05:31:29.905211 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3df93ff6-00cf-4c7f-8971-6d1d78795456/rabbitmq/0.log" Feb 28 05:31:29 crc kubenswrapper[5014]: I0228 05:31:29.979444 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7b0d0bd3-ff23-4098-93fb-debf7681cfce/setup-container/0.log" Feb 28 05:31:30 crc kubenswrapper[5014]: I0228 05:31:30.123080 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7b0d0bd3-ff23-4098-93fb-debf7681cfce/setup-container/0.log" Feb 28 05:31:30 crc kubenswrapper[5014]: I0228 05:31:30.145109 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7b0d0bd3-ff23-4098-93fb-debf7681cfce/rabbitmq/0.log" Feb 28 05:31:30 crc kubenswrapper[5014]: I0228 05:31:30.249844 5014 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-nrlg4_04a4501f-8652-4960-aa15-e083bf2c5b68/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 05:31:30 crc kubenswrapper[5014]: I0228 05:31:30.324732 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-86b55_64b99a72-222b-4ead-b368-fe335c674da5/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 05:31:30 crc kubenswrapper[5014]: I0228 05:31:30.499137 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj_01598708-a115-4ecd-a957-e78d6dbedfcb/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 05:31:30 crc kubenswrapper[5014]: I0228 05:31:30.529413 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-mbxvz_3d570627-429c-4a9c-a45a-55d652968c46/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 05:31:30 crc kubenswrapper[5014]: I0228 05:31:30.710661 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-shsr9_fd843e16-57f4-412b-aeec-d22b9609502f/ssh-known-hosts-edpm-deployment/0.log" Feb 28 05:31:30 crc kubenswrapper[5014]: I0228 05:31:30.888328 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6c68684b95-vvvhf_6d31e889-55bb-4dc4-b470-dcb11b4438a7/proxy-httpd/0.log" Feb 28 05:31:30 crc kubenswrapper[5014]: I0228 05:31:30.952034 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6c68684b95-vvvhf_6d31e889-55bb-4dc4-b470-dcb11b4438a7/proxy-server/0.log" Feb 28 05:31:31 crc kubenswrapper[5014]: I0228 05:31:31.008435 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-dn9mn_15c6e56b-a312-43c9-b627-af4138518fe4/swift-ring-rebalance/0.log" Feb 28 05:31:31 crc kubenswrapper[5014]: I0228 05:31:31.167159 5014 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2998e28e-fceb-4daa-a26c-74bffeba0d8f/account-auditor/0.log" Feb 28 05:31:31 crc kubenswrapper[5014]: I0228 05:31:31.188392 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2998e28e-fceb-4daa-a26c-74bffeba0d8f/account-reaper/0.log" Feb 28 05:31:31 crc kubenswrapper[5014]: I0228 05:31:31.284026 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2998e28e-fceb-4daa-a26c-74bffeba0d8f/account-replicator/0.log" Feb 28 05:31:31 crc kubenswrapper[5014]: I0228 05:31:31.371038 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2998e28e-fceb-4daa-a26c-74bffeba0d8f/account-server/0.log" Feb 28 05:31:31 crc kubenswrapper[5014]: I0228 05:31:31.417732 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2998e28e-fceb-4daa-a26c-74bffeba0d8f/container-auditor/0.log" Feb 28 05:31:31 crc kubenswrapper[5014]: I0228 05:31:31.452858 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2998e28e-fceb-4daa-a26c-74bffeba0d8f/container-replicator/0.log" Feb 28 05:31:31 crc kubenswrapper[5014]: I0228 05:31:31.476760 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2998e28e-fceb-4daa-a26c-74bffeba0d8f/container-server/0.log" Feb 28 05:31:31 crc kubenswrapper[5014]: I0228 05:31:31.583555 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2998e28e-fceb-4daa-a26c-74bffeba0d8f/container-updater/0.log" Feb 28 05:31:31 crc kubenswrapper[5014]: I0228 05:31:31.640949 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2998e28e-fceb-4daa-a26c-74bffeba0d8f/object-expirer/0.log" Feb 28 05:31:31 crc kubenswrapper[5014]: I0228 05:31:31.647158 5014 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_2998e28e-fceb-4daa-a26c-74bffeba0d8f/object-auditor/0.log" Feb 28 05:31:31 crc kubenswrapper[5014]: I0228 05:31:31.729890 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2998e28e-fceb-4daa-a26c-74bffeba0d8f/object-replicator/0.log" Feb 28 05:31:31 crc kubenswrapper[5014]: I0228 05:31:31.788475 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2998e28e-fceb-4daa-a26c-74bffeba0d8f/object-server/0.log" Feb 28 05:31:31 crc kubenswrapper[5014]: I0228 05:31:31.832575 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2998e28e-fceb-4daa-a26c-74bffeba0d8f/rsync/0.log" Feb 28 05:31:31 crc kubenswrapper[5014]: I0228 05:31:31.883574 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2998e28e-fceb-4daa-a26c-74bffeba0d8f/object-updater/0.log" Feb 28 05:31:31 crc kubenswrapper[5014]: I0228 05:31:31.952684 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2998e28e-fceb-4daa-a26c-74bffeba0d8f/swift-recon-cron/0.log" Feb 28 05:31:32 crc kubenswrapper[5014]: I0228 05:31:32.116148 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5_8bf54c30-88fb-46eb-8949-e2231e958201/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 05:31:32 crc kubenswrapper[5014]: I0228 05:31:32.161336 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_2db9b9b7-c55d-4b8b-b51b-cd081afed742/tempest-tests-tempest-tests-runner/0.log" Feb 28 05:31:32 crc kubenswrapper[5014]: I0228 05:31:32.346300 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_729e0ea7-49de-4e76-9921-8911ce80452e/test-operator-logs-container/0.log" Feb 28 05:31:32 crc kubenswrapper[5014]: I0228 
05:31:32.359776 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-8r6gv_5551729e-bd25-4c6c-b3d6-24a339aeab5c/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 05:31:40 crc kubenswrapper[5014]: I0228 05:31:40.330401 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_1420f298-151a-48af-bdb2-a58d5143967c/memcached/0.log" Feb 28 05:31:57 crc kubenswrapper[5014]: I0228 05:31:57.689851 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-5d87c9d997-587tn_52707aa4-b40d-4046-a721-e3b31a1f9648/manager/0.log" Feb 28 05:31:58 crc kubenswrapper[5014]: I0228 05:31:58.071052 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm_bf32d2bd-8642-45d7-ae34-876531251b37/util/0.log" Feb 28 05:31:58 crc kubenswrapper[5014]: I0228 05:31:58.263410 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm_bf32d2bd-8642-45d7-ae34-876531251b37/pull/0.log" Feb 28 05:31:58 crc kubenswrapper[5014]: I0228 05:31:58.302418 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm_bf32d2bd-8642-45d7-ae34-876531251b37/util/0.log" Feb 28 05:31:58 crc kubenswrapper[5014]: I0228 05:31:58.474081 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm_bf32d2bd-8642-45d7-ae34-876531251b37/pull/0.log" Feb 28 05:31:58 crc kubenswrapper[5014]: I0228 05:31:58.678109 5014 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm_bf32d2bd-8642-45d7-ae34-876531251b37/pull/0.log" Feb 28 05:31:58 crc kubenswrapper[5014]: I0228 05:31:58.712237 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm_bf32d2bd-8642-45d7-ae34-876531251b37/util/0.log" Feb 28 05:31:58 crc kubenswrapper[5014]: I0228 05:31:58.805397 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55d77d7b5c-n9r5r_385767a3-7908-4f17-9f63-ea25c784c715/manager/0.log" Feb 28 05:31:58 crc kubenswrapper[5014]: I0228 05:31:58.844100 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm_bf32d2bd-8642-45d7-ae34-876531251b37/extract/0.log" Feb 28 05:31:59 crc kubenswrapper[5014]: I0228 05:31:59.101005 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-64db6967f8-5t42k_f734a97b-b94d-4132-a426-15111b3fc207/manager/0.log" Feb 28 05:31:59 crc kubenswrapper[5014]: I0228 05:31:59.194407 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-cf99c678f-2srvx_9fe3aab0-3f3b-4fb3-a5da-2206ba55e813/manager/0.log" Feb 28 05:31:59 crc kubenswrapper[5014]: I0228 05:31:59.341782 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-78bc7f9bd9-ppf6c_5b9d913b-e0e8-42f5-8d98-60fd3c219ff8/manager/0.log" Feb 28 05:31:59 crc kubenswrapper[5014]: I0228 05:31:59.625760 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-545456dc4-7bmg5_dd26043c-48bc-4202-8266-d2590b6530e3/manager/0.log" Feb 28 05:31:59 crc kubenswrapper[5014]: I0228 05:31:59.896778 5014 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-786bd545f6-8hp88_0535be64-bda6-4b55-9eb1-fe5a86d3cae8/manager/0.log" Feb 28 05:31:59 crc kubenswrapper[5014]: I0228 05:31:59.911352 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7c789f89c6-cfb47_42fc68c6-e92f-4449-9398-518f904c58fb/manager/0.log" Feb 28 05:32:00 crc kubenswrapper[5014]: I0228 05:32:00.144847 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537612-jf9rb"] Feb 28 05:32:00 crc kubenswrapper[5014]: E0228 05:32:00.145296 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="582e47a4-0e8b-4491-91f8-5f9b33b3023b" containerName="container-00" Feb 28 05:32:00 crc kubenswrapper[5014]: I0228 05:32:00.145315 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="582e47a4-0e8b-4491-91f8-5f9b33b3023b" containerName="container-00" Feb 28 05:32:00 crc kubenswrapper[5014]: I0228 05:32:00.145522 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="582e47a4-0e8b-4491-91f8-5f9b33b3023b" containerName="container-00" Feb 28 05:32:00 crc kubenswrapper[5014]: I0228 05:32:00.146174 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537612-jf9rb" Feb 28 05:32:00 crc kubenswrapper[5014]: I0228 05:32:00.149324 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-r6pdk" Feb 28 05:32:00 crc kubenswrapper[5014]: I0228 05:32:00.149951 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 05:32:00 crc kubenswrapper[5014]: I0228 05:32:00.150242 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 05:32:00 crc kubenswrapper[5014]: I0228 05:32:00.166750 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537612-jf9rb"] Feb 28 05:32:00 crc kubenswrapper[5014]: I0228 05:32:00.204594 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-67d996989d-gm8rn_5189b3c2-1b93-432b-b1a3-dc579ef2abb6/manager/0.log" Feb 28 05:32:00 crc kubenswrapper[5014]: I0228 05:32:00.271155 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fc664\" (UniqueName: \"kubernetes.io/projected/cfd07fbb-9d7f-44c6-9d7f-2ba4d4b859b0-kube-api-access-fc664\") pod \"auto-csr-approver-29537612-jf9rb\" (UID: \"cfd07fbb-9d7f-44c6-9d7f-2ba4d4b859b0\") " pod="openshift-infra/auto-csr-approver-29537612-jf9rb" Feb 28 05:32:00 crc kubenswrapper[5014]: I0228 05:32:00.339543 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-7b6bfb6475-s4j6f_5f8b5a91-a57a-4679-a625-007592105038/manager/0.log" Feb 28 05:32:00 crc kubenswrapper[5014]: I0228 05:32:00.372652 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fc664\" (UniqueName: \"kubernetes.io/projected/cfd07fbb-9d7f-44c6-9d7f-2ba4d4b859b0-kube-api-access-fc664\") pod 
\"auto-csr-approver-29537612-jf9rb\" (UID: \"cfd07fbb-9d7f-44c6-9d7f-2ba4d4b859b0\") " pod="openshift-infra/auto-csr-approver-29537612-jf9rb" Feb 28 05:32:00 crc kubenswrapper[5014]: I0228 05:32:00.393008 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fc664\" (UniqueName: \"kubernetes.io/projected/cfd07fbb-9d7f-44c6-9d7f-2ba4d4b859b0-kube-api-access-fc664\") pod \"auto-csr-approver-29537612-jf9rb\" (UID: \"cfd07fbb-9d7f-44c6-9d7f-2ba4d4b859b0\") " pod="openshift-infra/auto-csr-approver-29537612-jf9rb" Feb 28 05:32:00 crc kubenswrapper[5014]: I0228 05:32:00.474415 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537612-jf9rb" Feb 28 05:32:00 crc kubenswrapper[5014]: I0228 05:32:00.504992 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-54688575f-xpd29_f5555801-1739-45d3-946f-3b731b87c593/manager/0.log" Feb 28 05:32:00 crc kubenswrapper[5014]: I0228 05:32:00.855399 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5d86c7ddb7-pl8nn_895709de-d62e-4101-8294-d73238790d9c/manager/0.log" Feb 28 05:32:00 crc kubenswrapper[5014]: I0228 05:32:00.877264 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-74b6b5dc96-rcb7d_d56f3210-6165-4bd1-b2e0-d8eb94b370a9/manager/0.log" Feb 28 05:32:01 crc kubenswrapper[5014]: I0228 05:32:01.010561 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537612-jf9rb"] Feb 28 05:32:01 crc kubenswrapper[5014]: I0228 05:32:01.100902 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj_7c84fa60-3777-4544-84ce-abc199e9df18/manager/0.log" Feb 28 05:32:01 crc kubenswrapper[5014]: I0228 05:32:01.363781 
5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537612-jf9rb" event={"ID":"cfd07fbb-9d7f-44c6-9d7f-2ba4d4b859b0","Type":"ContainerStarted","Data":"132cf79e5cfb120dc00ca9289a31627dc2a02cc444a394fe008c52f6c7c81c56"} Feb 28 05:32:01 crc kubenswrapper[5014]: I0228 05:32:01.401318 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-dddf4b8c5-khjpf_d6538cec-6b14-4d19-92b6-e1ada175e8a8/operator/0.log" Feb 28 05:32:01 crc kubenswrapper[5014]: I0228 05:32:01.621619 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-2vp6x_55997ed6-05a0-420d-bdaf-5d27ea9e0cf2/registry-server/0.log" Feb 28 05:32:01 crc kubenswrapper[5014]: I0228 05:32:01.932984 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-75684d597f-pg7jw_67e3c7dc-a78f-4039-b326-93795dd322ca/manager/0.log" Feb 28 05:32:02 crc kubenswrapper[5014]: I0228 05:32:02.238333 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-648564c9fc-gjdqb_07f212b7-6aea-4a43-95fa-4637b6dc1d87/manager/0.log" Feb 28 05:32:02 crc kubenswrapper[5014]: I0228 05:32:02.323328 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-c4ptt_90ad3ca4-2470-4ab2-9e22-17db53a7237d/operator/0.log" Feb 28 05:32:02 crc kubenswrapper[5014]: I0228 05:32:02.375694 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537612-jf9rb" event={"ID":"cfd07fbb-9d7f-44c6-9d7f-2ba4d4b859b0","Type":"ContainerStarted","Data":"fdf791af0089a7da1458fbdf1340d8e7b595d33219f633590233ed4472fc0d05"} Feb 28 05:32:02 crc kubenswrapper[5014]: I0228 05:32:02.391299 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-infra/auto-csr-approver-29537612-jf9rb" podStartSLOduration=1.521694118 podStartE2EDuration="2.39128409s" podCreationTimestamp="2026-02-28 05:32:00 +0000 UTC" firstStartedPulling="2026-02-28 05:32:01.013965797 +0000 UTC m=+3509.684091707" lastFinishedPulling="2026-02-28 05:32:01.883555769 +0000 UTC m=+3510.553681679" observedRunningTime="2026-02-28 05:32:02.387463497 +0000 UTC m=+3511.057589397" watchObservedRunningTime="2026-02-28 05:32:02.39128409 +0000 UTC m=+3511.061409990" Feb 28 05:32:02 crc kubenswrapper[5014]: I0228 05:32:02.637246 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9b9ff9f4d-snccq_f254469c-2cb3-4f38-8c52-960aa17d27fe/manager/0.log" Feb 28 05:32:02 crc kubenswrapper[5014]: I0228 05:32:02.797271 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5fdb694969-82d7x_1089c9f7-0d91-4639-9890-c41acc881797/manager/0.log" Feb 28 05:32:02 crc kubenswrapper[5014]: I0228 05:32:02.864669 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-55b5ff4dbb-clg6t_d9ccc996-b3d9-44f1-8a6e-c58517885a7c/manager/0.log" Feb 28 05:32:03 crc kubenswrapper[5014]: I0228 05:32:03.031098 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-bccc79885-975zn_f229b3d6-46dd-42ab-bb96-c207b02b35d0/manager/0.log" Feb 28 05:32:03 crc kubenswrapper[5014]: I0228 05:32:03.334016 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-76974fc5d7-9d7k5_b65e9823-17a7-42da-9191-af1db70355b9/manager/0.log" Feb 28 05:32:03 crc kubenswrapper[5014]: I0228 05:32:03.384553 5014 generic.go:334] "Generic (PLEG): container finished" podID="cfd07fbb-9d7f-44c6-9d7f-2ba4d4b859b0" containerID="fdf791af0089a7da1458fbdf1340d8e7b595d33219f633590233ed4472fc0d05" 
exitCode=0 Feb 28 05:32:03 crc kubenswrapper[5014]: I0228 05:32:03.384618 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537612-jf9rb" event={"ID":"cfd07fbb-9d7f-44c6-9d7f-2ba4d4b859b0","Type":"ContainerDied","Data":"fdf791af0089a7da1458fbdf1340d8e7b595d33219f633590233ed4472fc0d05"} Feb 28 05:32:03 crc kubenswrapper[5014]: I0228 05:32:03.969505 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6db6876945-p2g4k_7dfedb71-1284-4e5c-826d-efb134b34cdb/manager/0.log" Feb 28 05:32:04 crc kubenswrapper[5014]: I0228 05:32:04.734640 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537612-jf9rb" Feb 28 05:32:04 crc kubenswrapper[5014]: I0228 05:32:04.875728 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fc664\" (UniqueName: \"kubernetes.io/projected/cfd07fbb-9d7f-44c6-9d7f-2ba4d4b859b0-kube-api-access-fc664\") pod \"cfd07fbb-9d7f-44c6-9d7f-2ba4d4b859b0\" (UID: \"cfd07fbb-9d7f-44c6-9d7f-2ba4d4b859b0\") " Feb 28 05:32:04 crc kubenswrapper[5014]: I0228 05:32:04.885289 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfd07fbb-9d7f-44c6-9d7f-2ba4d4b859b0-kube-api-access-fc664" (OuterVolumeSpecName: "kube-api-access-fc664") pod "cfd07fbb-9d7f-44c6-9d7f-2ba4d4b859b0" (UID: "cfd07fbb-9d7f-44c6-9d7f-2ba4d4b859b0"). InnerVolumeSpecName "kube-api-access-fc664". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:32:04 crc kubenswrapper[5014]: I0228 05:32:04.979197 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fc664\" (UniqueName: \"kubernetes.io/projected/cfd07fbb-9d7f-44c6-9d7f-2ba4d4b859b0-kube-api-access-fc664\") on node \"crc\" DevicePath \"\"" Feb 28 05:32:05 crc kubenswrapper[5014]: I0228 05:32:05.276166 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537606-9stjj"] Feb 28 05:32:05 crc kubenswrapper[5014]: I0228 05:32:05.284505 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537606-9stjj"] Feb 28 05:32:05 crc kubenswrapper[5014]: I0228 05:32:05.402081 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537612-jf9rb" event={"ID":"cfd07fbb-9d7f-44c6-9d7f-2ba4d4b859b0","Type":"ContainerDied","Data":"132cf79e5cfb120dc00ca9289a31627dc2a02cc444a394fe008c52f6c7c81c56"} Feb 28 05:32:05 crc kubenswrapper[5014]: I0228 05:32:05.402109 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537612-jf9rb" Feb 28 05:32:05 crc kubenswrapper[5014]: I0228 05:32:05.402115 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="132cf79e5cfb120dc00ca9289a31627dc2a02cc444a394fe008c52f6c7c81c56" Feb 28 05:32:06 crc kubenswrapper[5014]: I0228 05:32:06.181981 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44d4ce55-e715-43aa-bc81-b8ae78482fb3" path="/var/lib/kubelet/pods/44d4ce55-e715-43aa-bc81-b8ae78482fb3/volumes" Feb 28 05:32:22 crc kubenswrapper[5014]: I0228 05:32:22.915737 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-z87qr_8bf7d4c6-1fd5-4fa4-a7a3-bf5af08d7eba/control-plane-machine-set-operator/0.log" Feb 28 05:32:23 crc kubenswrapper[5014]: I0228 05:32:23.088718 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-bpskb_c33c5a22-b0a7-4e91-9d5b-28b9908fbfd6/kube-rbac-proxy/0.log" Feb 28 05:32:23 crc kubenswrapper[5014]: I0228 05:32:23.135904 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-bpskb_c33c5a22-b0a7-4e91-9d5b-28b9908fbfd6/machine-api-operator/0.log" Feb 28 05:32:36 crc kubenswrapper[5014]: I0228 05:32:36.872396 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-lnv49_efbeff5a-c04c-47c0-8c97-338798ffc76b/cert-manager-controller/0.log" Feb 28 05:32:37 crc kubenswrapper[5014]: I0228 05:32:37.023199 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-pwx6w_74306563-899f-44f1-b51a-e9aed7bd437c/cert-manager-cainjector/0.log" Feb 28 05:32:37 crc kubenswrapper[5014]: I0228 05:32:37.080114 5014 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-gwzqp_f921b55b-c9e9-4183-a430-192642dc2b06/cert-manager-webhook/0.log" Feb 28 05:32:45 crc kubenswrapper[5014]: I0228 05:32:45.706942 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 05:32:45 crc kubenswrapper[5014]: I0228 05:32:45.709683 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 05:32:50 crc kubenswrapper[5014]: I0228 05:32:50.367793 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5dcbbd79cf-9dtw6_77ea3bfd-fad5-4789-8930-d7b7148453b2/nmstate-console-plugin/0.log" Feb 28 05:32:50 crc kubenswrapper[5014]: I0228 05:32:50.566533 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-qn5jv_057e43b5-a9ff-43d5-9f75-e9add271d1a6/nmstate-handler/0.log" Feb 28 05:32:50 crc kubenswrapper[5014]: I0228 05:32:50.642016 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-qktq9_72580a24-d267-4917-955f-639fb9600a27/kube-rbac-proxy/0.log" Feb 28 05:32:50 crc kubenswrapper[5014]: I0228 05:32:50.703638 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-qktq9_72580a24-d267-4917-955f-639fb9600a27/nmstate-metrics/0.log" Feb 28 05:32:50 crc kubenswrapper[5014]: I0228 05:32:50.842697 5014 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-operator-75c5dccd6c-hdp54_1a5c4be4-d285-425e-bd4b-26cbf4d48b0e/nmstate-operator/0.log" Feb 28 05:32:51 crc kubenswrapper[5014]: I0228 05:32:51.014382 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-786f45cff4-lpxlh_7116c3b6-8ec4-42af-9739-9c4b1ea6e7c6/nmstate-webhook/0.log" Feb 28 05:32:58 crc kubenswrapper[5014]: I0228 05:32:58.492227 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nv4j8"] Feb 28 05:32:58 crc kubenswrapper[5014]: E0228 05:32:58.493229 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfd07fbb-9d7f-44c6-9d7f-2ba4d4b859b0" containerName="oc" Feb 28 05:32:58 crc kubenswrapper[5014]: I0228 05:32:58.493243 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfd07fbb-9d7f-44c6-9d7f-2ba4d4b859b0" containerName="oc" Feb 28 05:32:58 crc kubenswrapper[5014]: I0228 05:32:58.493458 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfd07fbb-9d7f-44c6-9d7f-2ba4d4b859b0" containerName="oc" Feb 28 05:32:58 crc kubenswrapper[5014]: I0228 05:32:58.506783 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nv4j8" Feb 28 05:32:58 crc kubenswrapper[5014]: I0228 05:32:58.507674 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nv4j8"] Feb 28 05:32:58 crc kubenswrapper[5014]: I0228 05:32:58.616633 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlcb8\" (UniqueName: \"kubernetes.io/projected/c25f0f9d-f9e9-4969-9020-2eb10918f693-kube-api-access-dlcb8\") pod \"certified-operators-nv4j8\" (UID: \"c25f0f9d-f9e9-4969-9020-2eb10918f693\") " pod="openshift-marketplace/certified-operators-nv4j8" Feb 28 05:32:58 crc kubenswrapper[5014]: I0228 05:32:58.616878 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c25f0f9d-f9e9-4969-9020-2eb10918f693-utilities\") pod \"certified-operators-nv4j8\" (UID: \"c25f0f9d-f9e9-4969-9020-2eb10918f693\") " pod="openshift-marketplace/certified-operators-nv4j8" Feb 28 05:32:58 crc kubenswrapper[5014]: I0228 05:32:58.616972 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c25f0f9d-f9e9-4969-9020-2eb10918f693-catalog-content\") pod \"certified-operators-nv4j8\" (UID: \"c25f0f9d-f9e9-4969-9020-2eb10918f693\") " pod="openshift-marketplace/certified-operators-nv4j8" Feb 28 05:32:58 crc kubenswrapper[5014]: I0228 05:32:58.719882 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c25f0f9d-f9e9-4969-9020-2eb10918f693-utilities\") pod \"certified-operators-nv4j8\" (UID: \"c25f0f9d-f9e9-4969-9020-2eb10918f693\") " pod="openshift-marketplace/certified-operators-nv4j8" Feb 28 05:32:58 crc kubenswrapper[5014]: I0228 05:32:58.719974 5014 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c25f0f9d-f9e9-4969-9020-2eb10918f693-catalog-content\") pod \"certified-operators-nv4j8\" (UID: \"c25f0f9d-f9e9-4969-9020-2eb10918f693\") " pod="openshift-marketplace/certified-operators-nv4j8" Feb 28 05:32:58 crc kubenswrapper[5014]: I0228 05:32:58.720119 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlcb8\" (UniqueName: \"kubernetes.io/projected/c25f0f9d-f9e9-4969-9020-2eb10918f693-kube-api-access-dlcb8\") pod \"certified-operators-nv4j8\" (UID: \"c25f0f9d-f9e9-4969-9020-2eb10918f693\") " pod="openshift-marketplace/certified-operators-nv4j8" Feb 28 05:32:58 crc kubenswrapper[5014]: I0228 05:32:58.721210 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c25f0f9d-f9e9-4969-9020-2eb10918f693-utilities\") pod \"certified-operators-nv4j8\" (UID: \"c25f0f9d-f9e9-4969-9020-2eb10918f693\") " pod="openshift-marketplace/certified-operators-nv4j8" Feb 28 05:32:58 crc kubenswrapper[5014]: I0228 05:32:58.721508 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c25f0f9d-f9e9-4969-9020-2eb10918f693-catalog-content\") pod \"certified-operators-nv4j8\" (UID: \"c25f0f9d-f9e9-4969-9020-2eb10918f693\") " pod="openshift-marketplace/certified-operators-nv4j8" Feb 28 05:32:58 crc kubenswrapper[5014]: I0228 05:32:58.749601 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlcb8\" (UniqueName: \"kubernetes.io/projected/c25f0f9d-f9e9-4969-9020-2eb10918f693-kube-api-access-dlcb8\") pod \"certified-operators-nv4j8\" (UID: \"c25f0f9d-f9e9-4969-9020-2eb10918f693\") " pod="openshift-marketplace/certified-operators-nv4j8" Feb 28 05:32:58 crc kubenswrapper[5014]: I0228 05:32:58.837029 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nv4j8" Feb 28 05:32:59 crc kubenswrapper[5014]: I0228 05:32:59.321132 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nv4j8"] Feb 28 05:32:59 crc kubenswrapper[5014]: I0228 05:32:59.340103 5014 scope.go:117] "RemoveContainer" containerID="ebf60cc84e8d3bf9a622f3a61a1c04ceb0413fe7c4cb0d63e48c36047eaaae35" Feb 28 05:32:59 crc kubenswrapper[5014]: I0228 05:32:59.870848 5014 generic.go:334] "Generic (PLEG): container finished" podID="c25f0f9d-f9e9-4969-9020-2eb10918f693" containerID="d784f7e7bea14e8e16a6ebd6581ed8adcbfddfafcf063b0e0977a8d54c0f387b" exitCode=0 Feb 28 05:32:59 crc kubenswrapper[5014]: I0228 05:32:59.870907 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nv4j8" event={"ID":"c25f0f9d-f9e9-4969-9020-2eb10918f693","Type":"ContainerDied","Data":"d784f7e7bea14e8e16a6ebd6581ed8adcbfddfafcf063b0e0977a8d54c0f387b"} Feb 28 05:32:59 crc kubenswrapper[5014]: I0228 05:32:59.871096 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nv4j8" event={"ID":"c25f0f9d-f9e9-4969-9020-2eb10918f693","Type":"ContainerStarted","Data":"c0d7212fdc8e7f491bc727b14a009be6deb1fff63f44f3cb10df3745cdc7a651"} Feb 28 05:33:00 crc kubenswrapper[5014]: I0228 05:33:00.881948 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nv4j8" event={"ID":"c25f0f9d-f9e9-4969-9020-2eb10918f693","Type":"ContainerStarted","Data":"b77b526ce3e82b88d2198405ce29514b90ec4b65b2e0486517577720e701c17f"} Feb 28 05:33:01 crc kubenswrapper[5014]: I0228 05:33:01.893309 5014 generic.go:334] "Generic (PLEG): container finished" podID="c25f0f9d-f9e9-4969-9020-2eb10918f693" containerID="b77b526ce3e82b88d2198405ce29514b90ec4b65b2e0486517577720e701c17f" exitCode=0 Feb 28 05:33:01 crc kubenswrapper[5014]: I0228 05:33:01.893353 5014 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nv4j8" event={"ID":"c25f0f9d-f9e9-4969-9020-2eb10918f693","Type":"ContainerDied","Data":"b77b526ce3e82b88d2198405ce29514b90ec4b65b2e0486517577720e701c17f"} Feb 28 05:33:02 crc kubenswrapper[5014]: I0228 05:33:02.906420 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nv4j8" event={"ID":"c25f0f9d-f9e9-4969-9020-2eb10918f693","Type":"ContainerStarted","Data":"da34fccb23118aa34bfe16c3a263db770b1dcfab554f5d7b9dbab599c6ec3514"} Feb 28 05:33:02 crc kubenswrapper[5014]: I0228 05:33:02.941150 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nv4j8" podStartSLOduration=2.473661694 podStartE2EDuration="4.941123139s" podCreationTimestamp="2026-02-28 05:32:58 +0000 UTC" firstStartedPulling="2026-02-28 05:32:59.872976878 +0000 UTC m=+3568.543102788" lastFinishedPulling="2026-02-28 05:33:02.340438323 +0000 UTC m=+3571.010564233" observedRunningTime="2026-02-28 05:33:02.926451182 +0000 UTC m=+3571.596577102" watchObservedRunningTime="2026-02-28 05:33:02.941123139 +0000 UTC m=+3571.611249089" Feb 28 05:33:08 crc kubenswrapper[5014]: I0228 05:33:08.837459 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nv4j8" Feb 28 05:33:08 crc kubenswrapper[5014]: I0228 05:33:08.838071 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-nv4j8" Feb 28 05:33:08 crc kubenswrapper[5014]: I0228 05:33:08.891302 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nv4j8" Feb 28 05:33:09 crc kubenswrapper[5014]: I0228 05:33:09.038841 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nv4j8" Feb 28 05:33:09 crc kubenswrapper[5014]: 
I0228 05:33:09.144032 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nv4j8"] Feb 28 05:33:10 crc kubenswrapper[5014]: I0228 05:33:10.981972 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nv4j8" podUID="c25f0f9d-f9e9-4969-9020-2eb10918f693" containerName="registry-server" containerID="cri-o://da34fccb23118aa34bfe16c3a263db770b1dcfab554f5d7b9dbab599c6ec3514" gracePeriod=2 Feb 28 05:33:11 crc kubenswrapper[5014]: I0228 05:33:11.449957 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nv4j8" Feb 28 05:33:11 crc kubenswrapper[5014]: I0228 05:33:11.571362 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c25f0f9d-f9e9-4969-9020-2eb10918f693-catalog-content\") pod \"c25f0f9d-f9e9-4969-9020-2eb10918f693\" (UID: \"c25f0f9d-f9e9-4969-9020-2eb10918f693\") " Feb 28 05:33:11 crc kubenswrapper[5014]: I0228 05:33:11.571927 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlcb8\" (UniqueName: \"kubernetes.io/projected/c25f0f9d-f9e9-4969-9020-2eb10918f693-kube-api-access-dlcb8\") pod \"c25f0f9d-f9e9-4969-9020-2eb10918f693\" (UID: \"c25f0f9d-f9e9-4969-9020-2eb10918f693\") " Feb 28 05:33:11 crc kubenswrapper[5014]: I0228 05:33:11.572119 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c25f0f9d-f9e9-4969-9020-2eb10918f693-utilities\") pod \"c25f0f9d-f9e9-4969-9020-2eb10918f693\" (UID: \"c25f0f9d-f9e9-4969-9020-2eb10918f693\") " Feb 28 05:33:11 crc kubenswrapper[5014]: I0228 05:33:11.573049 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c25f0f9d-f9e9-4969-9020-2eb10918f693-utilities" 
(OuterVolumeSpecName: "utilities") pod "c25f0f9d-f9e9-4969-9020-2eb10918f693" (UID: "c25f0f9d-f9e9-4969-9020-2eb10918f693"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:33:11 crc kubenswrapper[5014]: I0228 05:33:11.586105 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c25f0f9d-f9e9-4969-9020-2eb10918f693-kube-api-access-dlcb8" (OuterVolumeSpecName: "kube-api-access-dlcb8") pod "c25f0f9d-f9e9-4969-9020-2eb10918f693" (UID: "c25f0f9d-f9e9-4969-9020-2eb10918f693"). InnerVolumeSpecName "kube-api-access-dlcb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:33:11 crc kubenswrapper[5014]: I0228 05:33:11.621825 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c25f0f9d-f9e9-4969-9020-2eb10918f693-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c25f0f9d-f9e9-4969-9020-2eb10918f693" (UID: "c25f0f9d-f9e9-4969-9020-2eb10918f693"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:33:11 crc kubenswrapper[5014]: I0228 05:33:11.674553 5014 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c25f0f9d-f9e9-4969-9020-2eb10918f693-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 05:33:11 crc kubenswrapper[5014]: I0228 05:33:11.674783 5014 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c25f0f9d-f9e9-4969-9020-2eb10918f693-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 05:33:11 crc kubenswrapper[5014]: I0228 05:33:11.674881 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlcb8\" (UniqueName: \"kubernetes.io/projected/c25f0f9d-f9e9-4969-9020-2eb10918f693-kube-api-access-dlcb8\") on node \"crc\" DevicePath \"\"" Feb 28 05:33:11 crc kubenswrapper[5014]: I0228 05:33:11.991029 5014 generic.go:334] "Generic (PLEG): container finished" podID="c25f0f9d-f9e9-4969-9020-2eb10918f693" containerID="da34fccb23118aa34bfe16c3a263db770b1dcfab554f5d7b9dbab599c6ec3514" exitCode=0 Feb 28 05:33:11 crc kubenswrapper[5014]: I0228 05:33:11.991076 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nv4j8" event={"ID":"c25f0f9d-f9e9-4969-9020-2eb10918f693","Type":"ContainerDied","Data":"da34fccb23118aa34bfe16c3a263db770b1dcfab554f5d7b9dbab599c6ec3514"} Feb 28 05:33:11 crc kubenswrapper[5014]: I0228 05:33:11.991109 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nv4j8" event={"ID":"c25f0f9d-f9e9-4969-9020-2eb10918f693","Type":"ContainerDied","Data":"c0d7212fdc8e7f491bc727b14a009be6deb1fff63f44f3cb10df3745cdc7a651"} Feb 28 05:33:11 crc kubenswrapper[5014]: I0228 05:33:11.991130 5014 scope.go:117] "RemoveContainer" containerID="da34fccb23118aa34bfe16c3a263db770b1dcfab554f5d7b9dbab599c6ec3514" Feb 28 05:33:11 crc kubenswrapper[5014]: I0228 
05:33:11.991973 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nv4j8" Feb 28 05:33:12 crc kubenswrapper[5014]: I0228 05:33:12.010151 5014 scope.go:117] "RemoveContainer" containerID="b77b526ce3e82b88d2198405ce29514b90ec4b65b2e0486517577720e701c17f" Feb 28 05:33:12 crc kubenswrapper[5014]: I0228 05:33:12.030248 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nv4j8"] Feb 28 05:33:12 crc kubenswrapper[5014]: I0228 05:33:12.040248 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nv4j8"] Feb 28 05:33:12 crc kubenswrapper[5014]: I0228 05:33:12.045474 5014 scope.go:117] "RemoveContainer" containerID="d784f7e7bea14e8e16a6ebd6581ed8adcbfddfafcf063b0e0977a8d54c0f387b" Feb 28 05:33:12 crc kubenswrapper[5014]: I0228 05:33:12.076838 5014 scope.go:117] "RemoveContainer" containerID="da34fccb23118aa34bfe16c3a263db770b1dcfab554f5d7b9dbab599c6ec3514" Feb 28 05:33:12 crc kubenswrapper[5014]: E0228 05:33:12.078131 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da34fccb23118aa34bfe16c3a263db770b1dcfab554f5d7b9dbab599c6ec3514\": container with ID starting with da34fccb23118aa34bfe16c3a263db770b1dcfab554f5d7b9dbab599c6ec3514 not found: ID does not exist" containerID="da34fccb23118aa34bfe16c3a263db770b1dcfab554f5d7b9dbab599c6ec3514" Feb 28 05:33:12 crc kubenswrapper[5014]: I0228 05:33:12.078188 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da34fccb23118aa34bfe16c3a263db770b1dcfab554f5d7b9dbab599c6ec3514"} err="failed to get container status \"da34fccb23118aa34bfe16c3a263db770b1dcfab554f5d7b9dbab599c6ec3514\": rpc error: code = NotFound desc = could not find container \"da34fccb23118aa34bfe16c3a263db770b1dcfab554f5d7b9dbab599c6ec3514\": container with ID starting with 
da34fccb23118aa34bfe16c3a263db770b1dcfab554f5d7b9dbab599c6ec3514 not found: ID does not exist" Feb 28 05:33:12 crc kubenswrapper[5014]: I0228 05:33:12.078232 5014 scope.go:117] "RemoveContainer" containerID="b77b526ce3e82b88d2198405ce29514b90ec4b65b2e0486517577720e701c17f" Feb 28 05:33:12 crc kubenswrapper[5014]: E0228 05:33:12.078566 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b77b526ce3e82b88d2198405ce29514b90ec4b65b2e0486517577720e701c17f\": container with ID starting with b77b526ce3e82b88d2198405ce29514b90ec4b65b2e0486517577720e701c17f not found: ID does not exist" containerID="b77b526ce3e82b88d2198405ce29514b90ec4b65b2e0486517577720e701c17f" Feb 28 05:33:12 crc kubenswrapper[5014]: I0228 05:33:12.078589 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b77b526ce3e82b88d2198405ce29514b90ec4b65b2e0486517577720e701c17f"} err="failed to get container status \"b77b526ce3e82b88d2198405ce29514b90ec4b65b2e0486517577720e701c17f\": rpc error: code = NotFound desc = could not find container \"b77b526ce3e82b88d2198405ce29514b90ec4b65b2e0486517577720e701c17f\": container with ID starting with b77b526ce3e82b88d2198405ce29514b90ec4b65b2e0486517577720e701c17f not found: ID does not exist" Feb 28 05:33:12 crc kubenswrapper[5014]: I0228 05:33:12.078621 5014 scope.go:117] "RemoveContainer" containerID="d784f7e7bea14e8e16a6ebd6581ed8adcbfddfafcf063b0e0977a8d54c0f387b" Feb 28 05:33:12 crc kubenswrapper[5014]: E0228 05:33:12.078875 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d784f7e7bea14e8e16a6ebd6581ed8adcbfddfafcf063b0e0977a8d54c0f387b\": container with ID starting with d784f7e7bea14e8e16a6ebd6581ed8adcbfddfafcf063b0e0977a8d54c0f387b not found: ID does not exist" containerID="d784f7e7bea14e8e16a6ebd6581ed8adcbfddfafcf063b0e0977a8d54c0f387b" Feb 28 05:33:12 crc 
kubenswrapper[5014]: I0228 05:33:12.078923 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d784f7e7bea14e8e16a6ebd6581ed8adcbfddfafcf063b0e0977a8d54c0f387b"} err="failed to get container status \"d784f7e7bea14e8e16a6ebd6581ed8adcbfddfafcf063b0e0977a8d54c0f387b\": rpc error: code = NotFound desc = could not find container \"d784f7e7bea14e8e16a6ebd6581ed8adcbfddfafcf063b0e0977a8d54c0f387b\": container with ID starting with d784f7e7bea14e8e16a6ebd6581ed8adcbfddfafcf063b0e0977a8d54c0f387b not found: ID does not exist" Feb 28 05:33:12 crc kubenswrapper[5014]: I0228 05:33:12.187122 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c25f0f9d-f9e9-4969-9020-2eb10918f693" path="/var/lib/kubelet/pods/c25f0f9d-f9e9-4969-9020-2eb10918f693/volumes" Feb 28 05:33:15 crc kubenswrapper[5014]: I0228 05:33:15.707167 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 05:33:15 crc kubenswrapper[5014]: I0228 05:33:15.707907 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 05:33:21 crc kubenswrapper[5014]: I0228 05:33:21.702045 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-tl2qx_52613a39-487f-4a3e-b2fb-97e969552377/kube-rbac-proxy/0.log" Feb 28 05:33:21 crc kubenswrapper[5014]: I0228 05:33:21.861849 5014 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-86ddb6bd46-tl2qx_52613a39-487f-4a3e-b2fb-97e969552377/controller/0.log" Feb 28 05:33:21 crc kubenswrapper[5014]: I0228 05:33:21.916426 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rkp2w_cd8eb09e-7a57-4b01-b09c-519bbca4c5ed/cp-frr-files/0.log" Feb 28 05:33:22 crc kubenswrapper[5014]: I0228 05:33:22.062391 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rkp2w_cd8eb09e-7a57-4b01-b09c-519bbca4c5ed/cp-metrics/0.log" Feb 28 05:33:22 crc kubenswrapper[5014]: I0228 05:33:22.064620 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rkp2w_cd8eb09e-7a57-4b01-b09c-519bbca4c5ed/cp-reloader/0.log" Feb 28 05:33:22 crc kubenswrapper[5014]: I0228 05:33:22.123096 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rkp2w_cd8eb09e-7a57-4b01-b09c-519bbca4c5ed/cp-frr-files/0.log" Feb 28 05:33:22 crc kubenswrapper[5014]: I0228 05:33:22.158234 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rkp2w_cd8eb09e-7a57-4b01-b09c-519bbca4c5ed/cp-reloader/0.log" Feb 28 05:33:22 crc kubenswrapper[5014]: I0228 05:33:22.308831 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rkp2w_cd8eb09e-7a57-4b01-b09c-519bbca4c5ed/cp-metrics/0.log" Feb 28 05:33:22 crc kubenswrapper[5014]: I0228 05:33:22.308994 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rkp2w_cd8eb09e-7a57-4b01-b09c-519bbca4c5ed/cp-frr-files/0.log" Feb 28 05:33:22 crc kubenswrapper[5014]: I0228 05:33:22.343290 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rkp2w_cd8eb09e-7a57-4b01-b09c-519bbca4c5ed/cp-reloader/0.log" Feb 28 05:33:22 crc kubenswrapper[5014]: I0228 05:33:22.352204 5014 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-rkp2w_cd8eb09e-7a57-4b01-b09c-519bbca4c5ed/cp-metrics/0.log" Feb 28 05:33:22 crc kubenswrapper[5014]: I0228 05:33:22.492684 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rkp2w_cd8eb09e-7a57-4b01-b09c-519bbca4c5ed/cp-metrics/0.log" Feb 28 05:33:22 crc kubenswrapper[5014]: I0228 05:33:22.512013 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rkp2w_cd8eb09e-7a57-4b01-b09c-519bbca4c5ed/cp-frr-files/0.log" Feb 28 05:33:22 crc kubenswrapper[5014]: I0228 05:33:22.534167 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rkp2w_cd8eb09e-7a57-4b01-b09c-519bbca4c5ed/cp-reloader/0.log" Feb 28 05:33:22 crc kubenswrapper[5014]: I0228 05:33:22.577672 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rkp2w_cd8eb09e-7a57-4b01-b09c-519bbca4c5ed/controller/0.log" Feb 28 05:33:22 crc kubenswrapper[5014]: I0228 05:33:22.730153 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rkp2w_cd8eb09e-7a57-4b01-b09c-519bbca4c5ed/kube-rbac-proxy/0.log" Feb 28 05:33:22 crc kubenswrapper[5014]: I0228 05:33:22.755136 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rkp2w_cd8eb09e-7a57-4b01-b09c-519bbca4c5ed/frr-metrics/0.log" Feb 28 05:33:22 crc kubenswrapper[5014]: I0228 05:33:22.806504 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rkp2w_cd8eb09e-7a57-4b01-b09c-519bbca4c5ed/kube-rbac-proxy-frr/0.log" Feb 28 05:33:23 crc kubenswrapper[5014]: I0228 05:33:23.017654 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rkp2w_cd8eb09e-7a57-4b01-b09c-519bbca4c5ed/reloader/0.log" Feb 28 05:33:23 crc kubenswrapper[5014]: I0228 05:33:23.108119 5014 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7f989f654f-vwrdt_d1916ff1-d765-4133-8db7-50b8c6c9d3da/frr-k8s-webhook-server/0.log" Feb 28 05:33:23 crc kubenswrapper[5014]: I0228 05:33:23.410450 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-c97d79cb8-9k7r6_7765e634-9939-4dca-82bc-847db81c81e4/manager/0.log" Feb 28 05:33:23 crc kubenswrapper[5014]: I0228 05:33:23.535049 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-75b5fcbdc5-txj9m_fec123b5-34af-438f-8a38-306d3484b235/webhook-server/0.log" Feb 28 05:33:23 crc kubenswrapper[5014]: I0228 05:33:23.656079 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-v6tb4_4e21c24c-ac78-4bff-863f-dfd7b10d0c7a/kube-rbac-proxy/0.log" Feb 28 05:33:23 crc kubenswrapper[5014]: I0228 05:33:23.951922 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rkp2w_cd8eb09e-7a57-4b01-b09c-519bbca4c5ed/frr/0.log" Feb 28 05:33:24 crc kubenswrapper[5014]: I0228 05:33:24.046717 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-v6tb4_4e21c24c-ac78-4bff-863f-dfd7b10d0c7a/speaker/0.log" Feb 28 05:33:37 crc kubenswrapper[5014]: I0228 05:33:37.662146 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh_f0a92225-1e40-4c6b-af69-652221b1273a/util/0.log" Feb 28 05:33:37 crc kubenswrapper[5014]: I0228 05:33:37.845977 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh_f0a92225-1e40-4c6b-af69-652221b1273a/pull/0.log" Feb 28 05:33:37 crc kubenswrapper[5014]: I0228 05:33:37.850855 5014 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh_f0a92225-1e40-4c6b-af69-652221b1273a/util/0.log" Feb 28 05:33:37 crc kubenswrapper[5014]: I0228 05:33:37.851879 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh_f0a92225-1e40-4c6b-af69-652221b1273a/pull/0.log" Feb 28 05:33:38 crc kubenswrapper[5014]: I0228 05:33:38.043318 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh_f0a92225-1e40-4c6b-af69-652221b1273a/util/0.log" Feb 28 05:33:38 crc kubenswrapper[5014]: I0228 05:33:38.050635 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh_f0a92225-1e40-4c6b-af69-652221b1273a/pull/0.log" Feb 28 05:33:38 crc kubenswrapper[5014]: I0228 05:33:38.053324 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh_f0a92225-1e40-4c6b-af69-652221b1273a/extract/0.log" Feb 28 05:33:38 crc kubenswrapper[5014]: I0228 05:33:38.188282 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kc599_1286af62-b972-4b45-a18b-f7e0085a1a69/extract-utilities/0.log" Feb 28 05:33:38 crc kubenswrapper[5014]: I0228 05:33:38.397436 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kc599_1286af62-b972-4b45-a18b-f7e0085a1a69/extract-utilities/0.log" Feb 28 05:33:38 crc kubenswrapper[5014]: I0228 05:33:38.414326 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kc599_1286af62-b972-4b45-a18b-f7e0085a1a69/extract-content/0.log" Feb 28 05:33:38 crc kubenswrapper[5014]: I0228 05:33:38.414387 5014 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kc599_1286af62-b972-4b45-a18b-f7e0085a1a69/extract-content/0.log" Feb 28 05:33:38 crc kubenswrapper[5014]: I0228 05:33:38.574484 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kc599_1286af62-b972-4b45-a18b-f7e0085a1a69/extract-content/0.log" Feb 28 05:33:38 crc kubenswrapper[5014]: I0228 05:33:38.583440 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kc599_1286af62-b972-4b45-a18b-f7e0085a1a69/extract-utilities/0.log" Feb 28 05:33:38 crc kubenswrapper[5014]: I0228 05:33:38.796802 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rmvfd_e65a2cc1-a391-48ab-a843-e86f58cf278a/extract-utilities/0.log" Feb 28 05:33:38 crc kubenswrapper[5014]: I0228 05:33:38.993313 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rmvfd_e65a2cc1-a391-48ab-a843-e86f58cf278a/extract-content/0.log" Feb 28 05:33:39 crc kubenswrapper[5014]: I0228 05:33:39.002898 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rmvfd_e65a2cc1-a391-48ab-a843-e86f58cf278a/extract-utilities/0.log" Feb 28 05:33:39 crc kubenswrapper[5014]: I0228 05:33:39.045717 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rmvfd_e65a2cc1-a391-48ab-a843-e86f58cf278a/extract-content/0.log" Feb 28 05:33:39 crc kubenswrapper[5014]: I0228 05:33:39.185939 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rmvfd_e65a2cc1-a391-48ab-a843-e86f58cf278a/extract-content/0.log" Feb 28 05:33:39 crc kubenswrapper[5014]: I0228 05:33:39.194369 5014 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-rmvfd_e65a2cc1-a391-48ab-a843-e86f58cf278a/extract-utilities/0.log" Feb 28 05:33:39 crc kubenswrapper[5014]: I0228 05:33:39.228961 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kc599_1286af62-b972-4b45-a18b-f7e0085a1a69/registry-server/0.log" Feb 28 05:33:39 crc kubenswrapper[5014]: I0228 05:33:39.407824 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s_37c25bf9-a707-42db-9488-1cd660e44edc/util/0.log" Feb 28 05:33:39 crc kubenswrapper[5014]: I0228 05:33:39.669520 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s_37c25bf9-a707-42db-9488-1cd660e44edc/pull/0.log" Feb 28 05:33:39 crc kubenswrapper[5014]: I0228 05:33:39.670747 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s_37c25bf9-a707-42db-9488-1cd660e44edc/pull/0.log" Feb 28 05:33:39 crc kubenswrapper[5014]: I0228 05:33:39.704929 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s_37c25bf9-a707-42db-9488-1cd660e44edc/util/0.log" Feb 28 05:33:39 crc kubenswrapper[5014]: I0228 05:33:39.708293 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rmvfd_e65a2cc1-a391-48ab-a843-e86f58cf278a/registry-server/0.log" Feb 28 05:33:39 crc kubenswrapper[5014]: I0228 05:33:39.853050 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s_37c25bf9-a707-42db-9488-1cd660e44edc/util/0.log" Feb 28 05:33:39 crc kubenswrapper[5014]: I0228 05:33:39.857211 5014 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s_37c25bf9-a707-42db-9488-1cd660e44edc/pull/0.log" Feb 28 05:33:39 crc kubenswrapper[5014]: I0228 05:33:39.899518 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s_37c25bf9-a707-42db-9488-1cd660e44edc/extract/0.log" Feb 28 05:33:40 crc kubenswrapper[5014]: I0228 05:33:40.005333 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-lz2dz_da5f8445-0b83-49d2-8255-21a4074cbf0b/marketplace-operator/0.log" Feb 28 05:33:40 crc kubenswrapper[5014]: I0228 05:33:40.109923 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n92hm_c78f8995-32df-4c90-9919-e5e6f53c16ed/extract-utilities/0.log" Feb 28 05:33:40 crc kubenswrapper[5014]: I0228 05:33:40.301909 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n92hm_c78f8995-32df-4c90-9919-e5e6f53c16ed/extract-content/0.log" Feb 28 05:33:40 crc kubenswrapper[5014]: I0228 05:33:40.308185 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n92hm_c78f8995-32df-4c90-9919-e5e6f53c16ed/extract-utilities/0.log" Feb 28 05:33:40 crc kubenswrapper[5014]: I0228 05:33:40.330030 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n92hm_c78f8995-32df-4c90-9919-e5e6f53c16ed/extract-content/0.log" Feb 28 05:33:40 crc kubenswrapper[5014]: I0228 05:33:40.450713 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n92hm_c78f8995-32df-4c90-9919-e5e6f53c16ed/extract-utilities/0.log" Feb 28 05:33:40 crc kubenswrapper[5014]: I0228 05:33:40.471203 5014 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-n92hm_c78f8995-32df-4c90-9919-e5e6f53c16ed/extract-content/0.log" Feb 28 05:33:40 crc kubenswrapper[5014]: I0228 05:33:40.586272 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n92hm_c78f8995-32df-4c90-9919-e5e6f53c16ed/registry-server/0.log" Feb 28 05:33:40 crc kubenswrapper[5014]: I0228 05:33:40.621940 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9tfwt_86fe7b38-7d96-499b-a693-397309da77bd/extract-utilities/0.log" Feb 28 05:33:40 crc kubenswrapper[5014]: I0228 05:33:40.808794 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9tfwt_86fe7b38-7d96-499b-a693-397309da77bd/extract-content/0.log" Feb 28 05:33:40 crc kubenswrapper[5014]: I0228 05:33:40.810913 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9tfwt_86fe7b38-7d96-499b-a693-397309da77bd/extract-content/0.log" Feb 28 05:33:40 crc kubenswrapper[5014]: I0228 05:33:40.819961 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9tfwt_86fe7b38-7d96-499b-a693-397309da77bd/extract-utilities/0.log" Feb 28 05:33:40 crc kubenswrapper[5014]: I0228 05:33:40.990011 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9tfwt_86fe7b38-7d96-499b-a693-397309da77bd/extract-content/0.log" Feb 28 05:33:41 crc kubenswrapper[5014]: I0228 05:33:41.033547 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9tfwt_86fe7b38-7d96-499b-a693-397309da77bd/extract-utilities/0.log" Feb 28 05:33:41 crc kubenswrapper[5014]: I0228 05:33:41.501070 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9tfwt_86fe7b38-7d96-499b-a693-397309da77bd/registry-server/0.log" Feb 28 
05:33:45 crc kubenswrapper[5014]: I0228 05:33:45.706768 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 05:33:45 crc kubenswrapper[5014]: I0228 05:33:45.707181 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 05:33:45 crc kubenswrapper[5014]: I0228 05:33:45.707228 5014 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cct62" Feb 28 05:33:45 crc kubenswrapper[5014]: I0228 05:33:45.708059 5014 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ab09a27e103d3311268f3f8870f394e7de849ef8d8bdc4ab745c03fb930a3cfb"} pod="openshift-machine-config-operator/machine-config-daemon-cct62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 05:33:45 crc kubenswrapper[5014]: I0228 05:33:45.708117 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" containerID="cri-o://ab09a27e103d3311268f3f8870f394e7de849ef8d8bdc4ab745c03fb930a3cfb" gracePeriod=600 Feb 28 05:33:46 crc kubenswrapper[5014]: I0228 05:33:46.369269 5014 generic.go:334] "Generic (PLEG): container finished" podID="6aad0009-d904-48f8-8e30-82205907ece1" 
containerID="ab09a27e103d3311268f3f8870f394e7de849ef8d8bdc4ab745c03fb930a3cfb" exitCode=0 Feb 28 05:33:46 crc kubenswrapper[5014]: I0228 05:33:46.369343 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" event={"ID":"6aad0009-d904-48f8-8e30-82205907ece1","Type":"ContainerDied","Data":"ab09a27e103d3311268f3f8870f394e7de849ef8d8bdc4ab745c03fb930a3cfb"} Feb 28 05:33:46 crc kubenswrapper[5014]: I0228 05:33:46.370005 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" event={"ID":"6aad0009-d904-48f8-8e30-82205907ece1","Type":"ContainerStarted","Data":"40477bc19fb801309320ac64cfcdd068229d097d3d2605fd0be9518095f50e19"} Feb 28 05:33:46 crc kubenswrapper[5014]: I0228 05:33:46.370042 5014 scope.go:117] "RemoveContainer" containerID="192dfe00bc68c47111b7f3bc09d6d50a5ae0c8daea27c02e7d24d0f8e808dd55" Feb 28 05:33:57 crc kubenswrapper[5014]: E0228 05:33:57.307488 5014 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.150:59058->38.102.83.150:33019: write tcp 38.102.83.150:59058->38.102.83.150:33019: write: broken pipe Feb 28 05:34:00 crc kubenswrapper[5014]: I0228 05:34:00.140206 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537614-mj8kv"] Feb 28 05:34:00 crc kubenswrapper[5014]: E0228 05:34:00.140853 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c25f0f9d-f9e9-4969-9020-2eb10918f693" containerName="extract-content" Feb 28 05:34:00 crc kubenswrapper[5014]: I0228 05:34:00.140865 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="c25f0f9d-f9e9-4969-9020-2eb10918f693" containerName="extract-content" Feb 28 05:34:00 crc kubenswrapper[5014]: E0228 05:34:00.140876 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c25f0f9d-f9e9-4969-9020-2eb10918f693" containerName="registry-server" Feb 28 05:34:00 crc 
kubenswrapper[5014]: I0228 05:34:00.140883 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="c25f0f9d-f9e9-4969-9020-2eb10918f693" containerName="registry-server" Feb 28 05:34:00 crc kubenswrapper[5014]: E0228 05:34:00.140908 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c25f0f9d-f9e9-4969-9020-2eb10918f693" containerName="extract-utilities" Feb 28 05:34:00 crc kubenswrapper[5014]: I0228 05:34:00.140915 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="c25f0f9d-f9e9-4969-9020-2eb10918f693" containerName="extract-utilities" Feb 28 05:34:00 crc kubenswrapper[5014]: I0228 05:34:00.141108 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="c25f0f9d-f9e9-4969-9020-2eb10918f693" containerName="registry-server" Feb 28 05:34:00 crc kubenswrapper[5014]: I0228 05:34:00.141667 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537614-mj8kv" Feb 28 05:34:00 crc kubenswrapper[5014]: I0228 05:34:00.144333 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 05:34:00 crc kubenswrapper[5014]: I0228 05:34:00.144349 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 05:34:00 crc kubenswrapper[5014]: I0228 05:34:00.145428 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-r6pdk" Feb 28 05:34:00 crc kubenswrapper[5014]: I0228 05:34:00.152616 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537614-mj8kv"] Feb 28 05:34:00 crc kubenswrapper[5014]: I0228 05:34:00.271999 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f762t\" (UniqueName: \"kubernetes.io/projected/7ed45fbe-6686-42e2-9d85-7da2fb54784c-kube-api-access-f762t\") pod 
\"auto-csr-approver-29537614-mj8kv\" (UID: \"7ed45fbe-6686-42e2-9d85-7da2fb54784c\") " pod="openshift-infra/auto-csr-approver-29537614-mj8kv" Feb 28 05:34:00 crc kubenswrapper[5014]: I0228 05:34:00.373369 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f762t\" (UniqueName: \"kubernetes.io/projected/7ed45fbe-6686-42e2-9d85-7da2fb54784c-kube-api-access-f762t\") pod \"auto-csr-approver-29537614-mj8kv\" (UID: \"7ed45fbe-6686-42e2-9d85-7da2fb54784c\") " pod="openshift-infra/auto-csr-approver-29537614-mj8kv" Feb 28 05:34:00 crc kubenswrapper[5014]: I0228 05:34:00.400463 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f762t\" (UniqueName: \"kubernetes.io/projected/7ed45fbe-6686-42e2-9d85-7da2fb54784c-kube-api-access-f762t\") pod \"auto-csr-approver-29537614-mj8kv\" (UID: \"7ed45fbe-6686-42e2-9d85-7da2fb54784c\") " pod="openshift-infra/auto-csr-approver-29537614-mj8kv" Feb 28 05:34:00 crc kubenswrapper[5014]: I0228 05:34:00.463015 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537614-mj8kv" Feb 28 05:34:01 crc kubenswrapper[5014]: W0228 05:34:00.999767 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ed45fbe_6686_42e2_9d85_7da2fb54784c.slice/crio-7aa93b4fd62d9ac8ddb5a6beddac08ef7f643256ee1cd81309ed6250cbab176a WatchSource:0}: Error finding container 7aa93b4fd62d9ac8ddb5a6beddac08ef7f643256ee1cd81309ed6250cbab176a: Status 404 returned error can't find the container with id 7aa93b4fd62d9ac8ddb5a6beddac08ef7f643256ee1cd81309ed6250cbab176a Feb 28 05:34:01 crc kubenswrapper[5014]: I0228 05:34:01.022383 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537614-mj8kv"] Feb 28 05:34:01 crc kubenswrapper[5014]: I0228 05:34:01.506562 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537614-mj8kv" event={"ID":"7ed45fbe-6686-42e2-9d85-7da2fb54784c","Type":"ContainerStarted","Data":"7aa93b4fd62d9ac8ddb5a6beddac08ef7f643256ee1cd81309ed6250cbab176a"} Feb 28 05:34:02 crc kubenswrapper[5014]: I0228 05:34:02.518250 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537614-mj8kv" event={"ID":"7ed45fbe-6686-42e2-9d85-7da2fb54784c","Type":"ContainerStarted","Data":"ead0c225a7dc35995ddbed05655df713bac795f55f959e804eacea7f3d3ff92c"} Feb 28 05:34:02 crc kubenswrapper[5014]: I0228 05:34:02.538440 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537614-mj8kv" podStartSLOduration=1.64115125 podStartE2EDuration="2.538413747s" podCreationTimestamp="2026-02-28 05:34:00 +0000 UTC" firstStartedPulling="2026-02-28 05:34:01.003456593 +0000 UTC m=+3629.673582503" lastFinishedPulling="2026-02-28 05:34:01.90071909 +0000 UTC m=+3630.570845000" observedRunningTime="2026-02-28 05:34:02.532523518 +0000 UTC m=+3631.202649428" 
watchObservedRunningTime="2026-02-28 05:34:02.538413747 +0000 UTC m=+3631.208539657" Feb 28 05:34:03 crc kubenswrapper[5014]: I0228 05:34:03.527602 5014 generic.go:334] "Generic (PLEG): container finished" podID="7ed45fbe-6686-42e2-9d85-7da2fb54784c" containerID="ead0c225a7dc35995ddbed05655df713bac795f55f959e804eacea7f3d3ff92c" exitCode=0 Feb 28 05:34:03 crc kubenswrapper[5014]: I0228 05:34:03.527661 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537614-mj8kv" event={"ID":"7ed45fbe-6686-42e2-9d85-7da2fb54784c","Type":"ContainerDied","Data":"ead0c225a7dc35995ddbed05655df713bac795f55f959e804eacea7f3d3ff92c"} Feb 28 05:34:04 crc kubenswrapper[5014]: I0228 05:34:04.933472 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537614-mj8kv" Feb 28 05:34:05 crc kubenswrapper[5014]: I0228 05:34:05.073369 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f762t\" (UniqueName: \"kubernetes.io/projected/7ed45fbe-6686-42e2-9d85-7da2fb54784c-kube-api-access-f762t\") pod \"7ed45fbe-6686-42e2-9d85-7da2fb54784c\" (UID: \"7ed45fbe-6686-42e2-9d85-7da2fb54784c\") " Feb 28 05:34:05 crc kubenswrapper[5014]: I0228 05:34:05.082068 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ed45fbe-6686-42e2-9d85-7da2fb54784c-kube-api-access-f762t" (OuterVolumeSpecName: "kube-api-access-f762t") pod "7ed45fbe-6686-42e2-9d85-7da2fb54784c" (UID: "7ed45fbe-6686-42e2-9d85-7da2fb54784c"). InnerVolumeSpecName "kube-api-access-f762t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:34:05 crc kubenswrapper[5014]: I0228 05:34:05.175613 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f762t\" (UniqueName: \"kubernetes.io/projected/7ed45fbe-6686-42e2-9d85-7da2fb54784c-kube-api-access-f762t\") on node \"crc\" DevicePath \"\"" Feb 28 05:34:05 crc kubenswrapper[5014]: I0228 05:34:05.274945 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537608-j482h"] Feb 28 05:34:05 crc kubenswrapper[5014]: I0228 05:34:05.297995 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537608-j482h"] Feb 28 05:34:05 crc kubenswrapper[5014]: I0228 05:34:05.544017 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537614-mj8kv" event={"ID":"7ed45fbe-6686-42e2-9d85-7da2fb54784c","Type":"ContainerDied","Data":"7aa93b4fd62d9ac8ddb5a6beddac08ef7f643256ee1cd81309ed6250cbab176a"} Feb 28 05:34:05 crc kubenswrapper[5014]: I0228 05:34:05.544051 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7aa93b4fd62d9ac8ddb5a6beddac08ef7f643256ee1cd81309ed6250cbab176a" Feb 28 05:34:05 crc kubenswrapper[5014]: I0228 05:34:05.544099 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537614-mj8kv" Feb 28 05:34:06 crc kubenswrapper[5014]: I0228 05:34:06.180389 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13789da7-2c4d-4304-97da-be6aae8dadaa" path="/var/lib/kubelet/pods/13789da7-2c4d-4304-97da-be6aae8dadaa/volumes" Feb 28 05:34:52 crc kubenswrapper[5014]: I0228 05:34:52.880424 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8xks6"] Feb 28 05:34:52 crc kubenswrapper[5014]: E0228 05:34:52.881579 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ed45fbe-6686-42e2-9d85-7da2fb54784c" containerName="oc" Feb 28 05:34:52 crc kubenswrapper[5014]: I0228 05:34:52.881601 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ed45fbe-6686-42e2-9d85-7da2fb54784c" containerName="oc" Feb 28 05:34:52 crc kubenswrapper[5014]: I0228 05:34:52.882003 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ed45fbe-6686-42e2-9d85-7da2fb54784c" containerName="oc" Feb 28 05:34:52 crc kubenswrapper[5014]: I0228 05:34:52.884420 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8xks6" Feb 28 05:34:52 crc kubenswrapper[5014]: I0228 05:34:52.902181 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8xks6"] Feb 28 05:34:53 crc kubenswrapper[5014]: I0228 05:34:53.001327 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44584802-f67b-4ed4-b583-faf8ed9cf00b-catalog-content\") pod \"redhat-marketplace-8xks6\" (UID: \"44584802-f67b-4ed4-b583-faf8ed9cf00b\") " pod="openshift-marketplace/redhat-marketplace-8xks6" Feb 28 05:34:53 crc kubenswrapper[5014]: I0228 05:34:53.001692 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdv94\" (UniqueName: \"kubernetes.io/projected/44584802-f67b-4ed4-b583-faf8ed9cf00b-kube-api-access-zdv94\") pod \"redhat-marketplace-8xks6\" (UID: \"44584802-f67b-4ed4-b583-faf8ed9cf00b\") " pod="openshift-marketplace/redhat-marketplace-8xks6" Feb 28 05:34:53 crc kubenswrapper[5014]: I0228 05:34:53.001772 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44584802-f67b-4ed4-b583-faf8ed9cf00b-utilities\") pod \"redhat-marketplace-8xks6\" (UID: \"44584802-f67b-4ed4-b583-faf8ed9cf00b\") " pod="openshift-marketplace/redhat-marketplace-8xks6" Feb 28 05:34:53 crc kubenswrapper[5014]: I0228 05:34:53.103454 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44584802-f67b-4ed4-b583-faf8ed9cf00b-catalog-content\") pod \"redhat-marketplace-8xks6\" (UID: \"44584802-f67b-4ed4-b583-faf8ed9cf00b\") " pod="openshift-marketplace/redhat-marketplace-8xks6" Feb 28 05:34:53 crc kubenswrapper[5014]: I0228 05:34:53.103831 5014 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-zdv94\" (UniqueName: \"kubernetes.io/projected/44584802-f67b-4ed4-b583-faf8ed9cf00b-kube-api-access-zdv94\") pod \"redhat-marketplace-8xks6\" (UID: \"44584802-f67b-4ed4-b583-faf8ed9cf00b\") " pod="openshift-marketplace/redhat-marketplace-8xks6" Feb 28 05:34:53 crc kubenswrapper[5014]: I0228 05:34:53.104011 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44584802-f67b-4ed4-b583-faf8ed9cf00b-utilities\") pod \"redhat-marketplace-8xks6\" (UID: \"44584802-f67b-4ed4-b583-faf8ed9cf00b\") " pod="openshift-marketplace/redhat-marketplace-8xks6" Feb 28 05:34:53 crc kubenswrapper[5014]: I0228 05:34:53.104043 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44584802-f67b-4ed4-b583-faf8ed9cf00b-catalog-content\") pod \"redhat-marketplace-8xks6\" (UID: \"44584802-f67b-4ed4-b583-faf8ed9cf00b\") " pod="openshift-marketplace/redhat-marketplace-8xks6" Feb 28 05:34:53 crc kubenswrapper[5014]: I0228 05:34:53.104414 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44584802-f67b-4ed4-b583-faf8ed9cf00b-utilities\") pod \"redhat-marketplace-8xks6\" (UID: \"44584802-f67b-4ed4-b583-faf8ed9cf00b\") " pod="openshift-marketplace/redhat-marketplace-8xks6" Feb 28 05:34:53 crc kubenswrapper[5014]: I0228 05:34:53.125943 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdv94\" (UniqueName: \"kubernetes.io/projected/44584802-f67b-4ed4-b583-faf8ed9cf00b-kube-api-access-zdv94\") pod \"redhat-marketplace-8xks6\" (UID: \"44584802-f67b-4ed4-b583-faf8ed9cf00b\") " pod="openshift-marketplace/redhat-marketplace-8xks6" Feb 28 05:34:53 crc kubenswrapper[5014]: I0228 05:34:53.229428 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8xks6" Feb 28 05:34:53 crc kubenswrapper[5014]: I0228 05:34:53.787377 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8xks6"] Feb 28 05:34:53 crc kubenswrapper[5014]: W0228 05:34:53.789722 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44584802_f67b_4ed4_b583_faf8ed9cf00b.slice/crio-6591403e09bd60b79ddd4c8141b3566a5ac69b261f10f4ae4bf4428911581380 WatchSource:0}: Error finding container 6591403e09bd60b79ddd4c8141b3566a5ac69b261f10f4ae4bf4428911581380: Status 404 returned error can't find the container with id 6591403e09bd60b79ddd4c8141b3566a5ac69b261f10f4ae4bf4428911581380 Feb 28 05:34:54 crc kubenswrapper[5014]: I0228 05:34:54.085919 5014 generic.go:334] "Generic (PLEG): container finished" podID="44584802-f67b-4ed4-b583-faf8ed9cf00b" containerID="a2f795af7a677c0f2920b8b33c3c2355bbff7d5added8f82ed02b7f61860cddf" exitCode=0 Feb 28 05:34:54 crc kubenswrapper[5014]: I0228 05:34:54.085966 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8xks6" event={"ID":"44584802-f67b-4ed4-b583-faf8ed9cf00b","Type":"ContainerDied","Data":"a2f795af7a677c0f2920b8b33c3c2355bbff7d5added8f82ed02b7f61860cddf"} Feb 28 05:34:54 crc kubenswrapper[5014]: I0228 05:34:54.085997 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8xks6" event={"ID":"44584802-f67b-4ed4-b583-faf8ed9cf00b","Type":"ContainerStarted","Data":"6591403e09bd60b79ddd4c8141b3566a5ac69b261f10f4ae4bf4428911581380"} Feb 28 05:34:54 crc kubenswrapper[5014]: I0228 05:34:54.088329 5014 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 05:34:55 crc kubenswrapper[5014]: I0228 05:34:55.101156 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-8xks6" event={"ID":"44584802-f67b-4ed4-b583-faf8ed9cf00b","Type":"ContainerStarted","Data":"ee85d8283dc99e6e3a40635439f87af1f89183cb47354b8b50aaefbab2ebd5ff"} Feb 28 05:34:56 crc kubenswrapper[5014]: I0228 05:34:56.118853 5014 generic.go:334] "Generic (PLEG): container finished" podID="44584802-f67b-4ed4-b583-faf8ed9cf00b" containerID="ee85d8283dc99e6e3a40635439f87af1f89183cb47354b8b50aaefbab2ebd5ff" exitCode=0 Feb 28 05:34:56 crc kubenswrapper[5014]: I0228 05:34:56.119011 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8xks6" event={"ID":"44584802-f67b-4ed4-b583-faf8ed9cf00b","Type":"ContainerDied","Data":"ee85d8283dc99e6e3a40635439f87af1f89183cb47354b8b50aaefbab2ebd5ff"} Feb 28 05:34:57 crc kubenswrapper[5014]: I0228 05:34:57.132785 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8xks6" event={"ID":"44584802-f67b-4ed4-b583-faf8ed9cf00b","Type":"ContainerStarted","Data":"e9924ec37ee346186986570615c0f25f9d0cf2fb433aea4123fae4667dccd085"} Feb 28 05:34:57 crc kubenswrapper[5014]: I0228 05:34:57.163304 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8xks6" podStartSLOduration=2.681044836 podStartE2EDuration="5.163285621s" podCreationTimestamp="2026-02-28 05:34:52 +0000 UTC" firstStartedPulling="2026-02-28 05:34:54.088082508 +0000 UTC m=+3682.758208428" lastFinishedPulling="2026-02-28 05:34:56.570323303 +0000 UTC m=+3685.240449213" observedRunningTime="2026-02-28 05:34:57.16028391 +0000 UTC m=+3685.830409860" watchObservedRunningTime="2026-02-28 05:34:57.163285621 +0000 UTC m=+3685.833411541" Feb 28 05:34:59 crc kubenswrapper[5014]: I0228 05:34:59.501885 5014 scope.go:117] "RemoveContainer" containerID="bc388b30d32b496dabb8b065563b1d7641bf18b5022e7568453e4f3110ed1979" Feb 28 05:35:03 crc kubenswrapper[5014]: I0228 05:35:03.230284 5014 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-8xks6" Feb 28 05:35:03 crc kubenswrapper[5014]: I0228 05:35:03.231940 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8xks6" Feb 28 05:35:03 crc kubenswrapper[5014]: I0228 05:35:03.308543 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8xks6" Feb 28 05:35:04 crc kubenswrapper[5014]: I0228 05:35:04.292698 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8xks6" Feb 28 05:35:04 crc kubenswrapper[5014]: I0228 05:35:04.355419 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8xks6"] Feb 28 05:35:06 crc kubenswrapper[5014]: I0228 05:35:06.261431 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8xks6" podUID="44584802-f67b-4ed4-b583-faf8ed9cf00b" containerName="registry-server" containerID="cri-o://e9924ec37ee346186986570615c0f25f9d0cf2fb433aea4123fae4667dccd085" gracePeriod=2 Feb 28 05:35:06 crc kubenswrapper[5014]: I0228 05:35:06.770357 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8xks6" Feb 28 05:35:06 crc kubenswrapper[5014]: I0228 05:35:06.808284 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdv94\" (UniqueName: \"kubernetes.io/projected/44584802-f67b-4ed4-b583-faf8ed9cf00b-kube-api-access-zdv94\") pod \"44584802-f67b-4ed4-b583-faf8ed9cf00b\" (UID: \"44584802-f67b-4ed4-b583-faf8ed9cf00b\") " Feb 28 05:35:06 crc kubenswrapper[5014]: I0228 05:35:06.808569 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44584802-f67b-4ed4-b583-faf8ed9cf00b-utilities\") pod \"44584802-f67b-4ed4-b583-faf8ed9cf00b\" (UID: \"44584802-f67b-4ed4-b583-faf8ed9cf00b\") " Feb 28 05:35:06 crc kubenswrapper[5014]: I0228 05:35:06.808687 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44584802-f67b-4ed4-b583-faf8ed9cf00b-catalog-content\") pod \"44584802-f67b-4ed4-b583-faf8ed9cf00b\" (UID: \"44584802-f67b-4ed4-b583-faf8ed9cf00b\") " Feb 28 05:35:06 crc kubenswrapper[5014]: I0228 05:35:06.809445 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44584802-f67b-4ed4-b583-faf8ed9cf00b-utilities" (OuterVolumeSpecName: "utilities") pod "44584802-f67b-4ed4-b583-faf8ed9cf00b" (UID: "44584802-f67b-4ed4-b583-faf8ed9cf00b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:35:06 crc kubenswrapper[5014]: I0228 05:35:06.816335 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44584802-f67b-4ed4-b583-faf8ed9cf00b-kube-api-access-zdv94" (OuterVolumeSpecName: "kube-api-access-zdv94") pod "44584802-f67b-4ed4-b583-faf8ed9cf00b" (UID: "44584802-f67b-4ed4-b583-faf8ed9cf00b"). InnerVolumeSpecName "kube-api-access-zdv94". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:35:06 crc kubenswrapper[5014]: I0228 05:35:06.861488 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/44584802-f67b-4ed4-b583-faf8ed9cf00b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "44584802-f67b-4ed4-b583-faf8ed9cf00b" (UID: "44584802-f67b-4ed4-b583-faf8ed9cf00b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:35:06 crc kubenswrapper[5014]: I0228 05:35:06.911437 5014 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/44584802-f67b-4ed4-b583-faf8ed9cf00b-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 05:35:06 crc kubenswrapper[5014]: I0228 05:35:06.911531 5014 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/44584802-f67b-4ed4-b583-faf8ed9cf00b-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 05:35:06 crc kubenswrapper[5014]: I0228 05:35:06.912010 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdv94\" (UniqueName: \"kubernetes.io/projected/44584802-f67b-4ed4-b583-faf8ed9cf00b-kube-api-access-zdv94\") on node \"crc\" DevicePath \"\"" Feb 28 05:35:07 crc kubenswrapper[5014]: I0228 05:35:07.274273 5014 generic.go:334] "Generic (PLEG): container finished" podID="44584802-f67b-4ed4-b583-faf8ed9cf00b" containerID="e9924ec37ee346186986570615c0f25f9d0cf2fb433aea4123fae4667dccd085" exitCode=0 Feb 28 05:35:07 crc kubenswrapper[5014]: I0228 05:35:07.274310 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8xks6" event={"ID":"44584802-f67b-4ed4-b583-faf8ed9cf00b","Type":"ContainerDied","Data":"e9924ec37ee346186986570615c0f25f9d0cf2fb433aea4123fae4667dccd085"} Feb 28 05:35:07 crc kubenswrapper[5014]: I0228 05:35:07.274340 5014 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-8xks6" event={"ID":"44584802-f67b-4ed4-b583-faf8ed9cf00b","Type":"ContainerDied","Data":"6591403e09bd60b79ddd4c8141b3566a5ac69b261f10f4ae4bf4428911581380"} Feb 28 05:35:07 crc kubenswrapper[5014]: I0228 05:35:07.274357 5014 scope.go:117] "RemoveContainer" containerID="e9924ec37ee346186986570615c0f25f9d0cf2fb433aea4123fae4667dccd085" Feb 28 05:35:07 crc kubenswrapper[5014]: I0228 05:35:07.274368 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8xks6" Feb 28 05:35:07 crc kubenswrapper[5014]: I0228 05:35:07.323308 5014 scope.go:117] "RemoveContainer" containerID="ee85d8283dc99e6e3a40635439f87af1f89183cb47354b8b50aaefbab2ebd5ff" Feb 28 05:35:07 crc kubenswrapper[5014]: I0228 05:35:07.357410 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8xks6"] Feb 28 05:35:07 crc kubenswrapper[5014]: I0228 05:35:07.367651 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8xks6"] Feb 28 05:35:07 crc kubenswrapper[5014]: I0228 05:35:07.368249 5014 scope.go:117] "RemoveContainer" containerID="a2f795af7a677c0f2920b8b33c3c2355bbff7d5added8f82ed02b7f61860cddf" Feb 28 05:35:07 crc kubenswrapper[5014]: I0228 05:35:07.412921 5014 scope.go:117] "RemoveContainer" containerID="e9924ec37ee346186986570615c0f25f9d0cf2fb433aea4123fae4667dccd085" Feb 28 05:35:07 crc kubenswrapper[5014]: E0228 05:35:07.413503 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9924ec37ee346186986570615c0f25f9d0cf2fb433aea4123fae4667dccd085\": container with ID starting with e9924ec37ee346186986570615c0f25f9d0cf2fb433aea4123fae4667dccd085 not found: ID does not exist" containerID="e9924ec37ee346186986570615c0f25f9d0cf2fb433aea4123fae4667dccd085" Feb 28 05:35:07 crc kubenswrapper[5014]: I0228 05:35:07.413537 5014 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9924ec37ee346186986570615c0f25f9d0cf2fb433aea4123fae4667dccd085"} err="failed to get container status \"e9924ec37ee346186986570615c0f25f9d0cf2fb433aea4123fae4667dccd085\": rpc error: code = NotFound desc = could not find container \"e9924ec37ee346186986570615c0f25f9d0cf2fb433aea4123fae4667dccd085\": container with ID starting with e9924ec37ee346186986570615c0f25f9d0cf2fb433aea4123fae4667dccd085 not found: ID does not exist" Feb 28 05:35:07 crc kubenswrapper[5014]: I0228 05:35:07.413557 5014 scope.go:117] "RemoveContainer" containerID="ee85d8283dc99e6e3a40635439f87af1f89183cb47354b8b50aaefbab2ebd5ff" Feb 28 05:35:07 crc kubenswrapper[5014]: E0228 05:35:07.413904 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee85d8283dc99e6e3a40635439f87af1f89183cb47354b8b50aaefbab2ebd5ff\": container with ID starting with ee85d8283dc99e6e3a40635439f87af1f89183cb47354b8b50aaefbab2ebd5ff not found: ID does not exist" containerID="ee85d8283dc99e6e3a40635439f87af1f89183cb47354b8b50aaefbab2ebd5ff" Feb 28 05:35:07 crc kubenswrapper[5014]: I0228 05:35:07.413965 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee85d8283dc99e6e3a40635439f87af1f89183cb47354b8b50aaefbab2ebd5ff"} err="failed to get container status \"ee85d8283dc99e6e3a40635439f87af1f89183cb47354b8b50aaefbab2ebd5ff\": rpc error: code = NotFound desc = could not find container \"ee85d8283dc99e6e3a40635439f87af1f89183cb47354b8b50aaefbab2ebd5ff\": container with ID starting with ee85d8283dc99e6e3a40635439f87af1f89183cb47354b8b50aaefbab2ebd5ff not found: ID does not exist" Feb 28 05:35:07 crc kubenswrapper[5014]: I0228 05:35:07.413995 5014 scope.go:117] "RemoveContainer" containerID="a2f795af7a677c0f2920b8b33c3c2355bbff7d5added8f82ed02b7f61860cddf" Feb 28 05:35:07 crc kubenswrapper[5014]: E0228 
05:35:07.414326 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2f795af7a677c0f2920b8b33c3c2355bbff7d5added8f82ed02b7f61860cddf\": container with ID starting with a2f795af7a677c0f2920b8b33c3c2355bbff7d5added8f82ed02b7f61860cddf not found: ID does not exist" containerID="a2f795af7a677c0f2920b8b33c3c2355bbff7d5added8f82ed02b7f61860cddf" Feb 28 05:35:07 crc kubenswrapper[5014]: I0228 05:35:07.414358 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2f795af7a677c0f2920b8b33c3c2355bbff7d5added8f82ed02b7f61860cddf"} err="failed to get container status \"a2f795af7a677c0f2920b8b33c3c2355bbff7d5added8f82ed02b7f61860cddf\": rpc error: code = NotFound desc = could not find container \"a2f795af7a677c0f2920b8b33c3c2355bbff7d5added8f82ed02b7f61860cddf\": container with ID starting with a2f795af7a677c0f2920b8b33c3c2355bbff7d5added8f82ed02b7f61860cddf not found: ID does not exist" Feb 28 05:35:08 crc kubenswrapper[5014]: I0228 05:35:08.190760 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44584802-f67b-4ed4-b583-faf8ed9cf00b" path="/var/lib/kubelet/pods/44584802-f67b-4ed4-b583-faf8ed9cf00b/volumes" Feb 28 05:35:21 crc kubenswrapper[5014]: I0228 05:35:21.427593 5014 generic.go:334] "Generic (PLEG): container finished" podID="45f9a71c-0aad-4b18-97d8-fd99506da883" containerID="3084376352eb2b0687c356ce3d8809c084dc9f9c290a662b0d6a13b269867986" exitCode=0 Feb 28 05:35:21 crc kubenswrapper[5014]: I0228 05:35:21.427689 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8292t/must-gather-2pvrl" event={"ID":"45f9a71c-0aad-4b18-97d8-fd99506da883","Type":"ContainerDied","Data":"3084376352eb2b0687c356ce3d8809c084dc9f9c290a662b0d6a13b269867986"} Feb 28 05:35:21 crc kubenswrapper[5014]: I0228 05:35:21.429037 5014 scope.go:117] "RemoveContainer" 
containerID="3084376352eb2b0687c356ce3d8809c084dc9f9c290a662b0d6a13b269867986" Feb 28 05:35:22 crc kubenswrapper[5014]: I0228 05:35:22.238613 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8292t_must-gather-2pvrl_45f9a71c-0aad-4b18-97d8-fd99506da883/gather/0.log" Feb 28 05:35:30 crc kubenswrapper[5014]: I0228 05:35:30.474341 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8292t/must-gather-2pvrl"] Feb 28 05:35:30 crc kubenswrapper[5014]: I0228 05:35:30.475507 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-8292t/must-gather-2pvrl" podUID="45f9a71c-0aad-4b18-97d8-fd99506da883" containerName="copy" containerID="cri-o://7889b55b1d2e3a305aff044340afb37a9ce4090e0a94bd24a1c5342b9c5fa54f" gracePeriod=2 Feb 28 05:35:30 crc kubenswrapper[5014]: I0228 05:35:30.487041 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8292t/must-gather-2pvrl"] Feb 28 05:35:31 crc kubenswrapper[5014]: I0228 05:35:31.003758 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8292t_must-gather-2pvrl_45f9a71c-0aad-4b18-97d8-fd99506da883/copy/0.log" Feb 28 05:35:31 crc kubenswrapper[5014]: I0228 05:35:31.004449 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8292t/must-gather-2pvrl" Feb 28 05:35:31 crc kubenswrapper[5014]: I0228 05:35:31.145228 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/45f9a71c-0aad-4b18-97d8-fd99506da883-must-gather-output\") pod \"45f9a71c-0aad-4b18-97d8-fd99506da883\" (UID: \"45f9a71c-0aad-4b18-97d8-fd99506da883\") " Feb 28 05:35:31 crc kubenswrapper[5014]: I0228 05:35:31.145433 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56tl6\" (UniqueName: \"kubernetes.io/projected/45f9a71c-0aad-4b18-97d8-fd99506da883-kube-api-access-56tl6\") pod \"45f9a71c-0aad-4b18-97d8-fd99506da883\" (UID: \"45f9a71c-0aad-4b18-97d8-fd99506da883\") " Feb 28 05:35:31 crc kubenswrapper[5014]: I0228 05:35:31.167004 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45f9a71c-0aad-4b18-97d8-fd99506da883-kube-api-access-56tl6" (OuterVolumeSpecName: "kube-api-access-56tl6") pod "45f9a71c-0aad-4b18-97d8-fd99506da883" (UID: "45f9a71c-0aad-4b18-97d8-fd99506da883"). InnerVolumeSpecName "kube-api-access-56tl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:35:31 crc kubenswrapper[5014]: I0228 05:35:31.247106 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56tl6\" (UniqueName: \"kubernetes.io/projected/45f9a71c-0aad-4b18-97d8-fd99506da883-kube-api-access-56tl6\") on node \"crc\" DevicePath \"\"" Feb 28 05:35:31 crc kubenswrapper[5014]: I0228 05:35:31.340860 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45f9a71c-0aad-4b18-97d8-fd99506da883-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "45f9a71c-0aad-4b18-97d8-fd99506da883" (UID: "45f9a71c-0aad-4b18-97d8-fd99506da883"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:35:31 crc kubenswrapper[5014]: I0228 05:35:31.348583 5014 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/45f9a71c-0aad-4b18-97d8-fd99506da883-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 28 05:35:31 crc kubenswrapper[5014]: I0228 05:35:31.537753 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8292t_must-gather-2pvrl_45f9a71c-0aad-4b18-97d8-fd99506da883/copy/0.log" Feb 28 05:35:31 crc kubenswrapper[5014]: I0228 05:35:31.538228 5014 generic.go:334] "Generic (PLEG): container finished" podID="45f9a71c-0aad-4b18-97d8-fd99506da883" containerID="7889b55b1d2e3a305aff044340afb37a9ce4090e0a94bd24a1c5342b9c5fa54f" exitCode=143 Feb 28 05:35:31 crc kubenswrapper[5014]: I0228 05:35:31.538284 5014 scope.go:117] "RemoveContainer" containerID="7889b55b1d2e3a305aff044340afb37a9ce4090e0a94bd24a1c5342b9c5fa54f" Feb 28 05:35:31 crc kubenswrapper[5014]: I0228 05:35:31.538346 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8292t/must-gather-2pvrl" Feb 28 05:35:31 crc kubenswrapper[5014]: I0228 05:35:31.555727 5014 scope.go:117] "RemoveContainer" containerID="3084376352eb2b0687c356ce3d8809c084dc9f9c290a662b0d6a13b269867986" Feb 28 05:35:31 crc kubenswrapper[5014]: I0228 05:35:31.633078 5014 scope.go:117] "RemoveContainer" containerID="7889b55b1d2e3a305aff044340afb37a9ce4090e0a94bd24a1c5342b9c5fa54f" Feb 28 05:35:31 crc kubenswrapper[5014]: E0228 05:35:31.633701 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7889b55b1d2e3a305aff044340afb37a9ce4090e0a94bd24a1c5342b9c5fa54f\": container with ID starting with 7889b55b1d2e3a305aff044340afb37a9ce4090e0a94bd24a1c5342b9c5fa54f not found: ID does not exist" containerID="7889b55b1d2e3a305aff044340afb37a9ce4090e0a94bd24a1c5342b9c5fa54f" Feb 28 05:35:31 crc kubenswrapper[5014]: I0228 05:35:31.633745 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7889b55b1d2e3a305aff044340afb37a9ce4090e0a94bd24a1c5342b9c5fa54f"} err="failed to get container status \"7889b55b1d2e3a305aff044340afb37a9ce4090e0a94bd24a1c5342b9c5fa54f\": rpc error: code = NotFound desc = could not find container \"7889b55b1d2e3a305aff044340afb37a9ce4090e0a94bd24a1c5342b9c5fa54f\": container with ID starting with 7889b55b1d2e3a305aff044340afb37a9ce4090e0a94bd24a1c5342b9c5fa54f not found: ID does not exist" Feb 28 05:35:31 crc kubenswrapper[5014]: I0228 05:35:31.633773 5014 scope.go:117] "RemoveContainer" containerID="3084376352eb2b0687c356ce3d8809c084dc9f9c290a662b0d6a13b269867986" Feb 28 05:35:31 crc kubenswrapper[5014]: E0228 05:35:31.634322 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3084376352eb2b0687c356ce3d8809c084dc9f9c290a662b0d6a13b269867986\": container with ID starting with 
3084376352eb2b0687c356ce3d8809c084dc9f9c290a662b0d6a13b269867986 not found: ID does not exist" containerID="3084376352eb2b0687c356ce3d8809c084dc9f9c290a662b0d6a13b269867986" Feb 28 05:35:31 crc kubenswrapper[5014]: I0228 05:35:31.634363 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3084376352eb2b0687c356ce3d8809c084dc9f9c290a662b0d6a13b269867986"} err="failed to get container status \"3084376352eb2b0687c356ce3d8809c084dc9f9c290a662b0d6a13b269867986\": rpc error: code = NotFound desc = could not find container \"3084376352eb2b0687c356ce3d8809c084dc9f9c290a662b0d6a13b269867986\": container with ID starting with 3084376352eb2b0687c356ce3d8809c084dc9f9c290a662b0d6a13b269867986 not found: ID does not exist" Feb 28 05:35:32 crc kubenswrapper[5014]: I0228 05:35:32.224438 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45f9a71c-0aad-4b18-97d8-fd99506da883" path="/var/lib/kubelet/pods/45f9a71c-0aad-4b18-97d8-fd99506da883/volumes" Feb 28 05:36:00 crc kubenswrapper[5014]: I0228 05:36:00.182986 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537616-4smtl"] Feb 28 05:36:00 crc kubenswrapper[5014]: E0228 05:36:00.183933 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45f9a71c-0aad-4b18-97d8-fd99506da883" containerName="gather" Feb 28 05:36:00 crc kubenswrapper[5014]: I0228 05:36:00.183949 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="45f9a71c-0aad-4b18-97d8-fd99506da883" containerName="gather" Feb 28 05:36:00 crc kubenswrapper[5014]: E0228 05:36:00.183966 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44584802-f67b-4ed4-b583-faf8ed9cf00b" containerName="registry-server" Feb 28 05:36:00 crc kubenswrapper[5014]: I0228 05:36:00.183974 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="44584802-f67b-4ed4-b583-faf8ed9cf00b" containerName="registry-server" Feb 28 05:36:00 crc 
kubenswrapper[5014]: E0228 05:36:00.183991 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44584802-f67b-4ed4-b583-faf8ed9cf00b" containerName="extract-utilities" Feb 28 05:36:00 crc kubenswrapper[5014]: I0228 05:36:00.183998 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="44584802-f67b-4ed4-b583-faf8ed9cf00b" containerName="extract-utilities" Feb 28 05:36:00 crc kubenswrapper[5014]: E0228 05:36:00.184010 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44584802-f67b-4ed4-b583-faf8ed9cf00b" containerName="extract-content" Feb 28 05:36:00 crc kubenswrapper[5014]: I0228 05:36:00.184018 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="44584802-f67b-4ed4-b583-faf8ed9cf00b" containerName="extract-content" Feb 28 05:36:00 crc kubenswrapper[5014]: E0228 05:36:00.184039 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45f9a71c-0aad-4b18-97d8-fd99506da883" containerName="copy" Feb 28 05:36:00 crc kubenswrapper[5014]: I0228 05:36:00.184046 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="45f9a71c-0aad-4b18-97d8-fd99506da883" containerName="copy" Feb 28 05:36:00 crc kubenswrapper[5014]: I0228 05:36:00.184262 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="44584802-f67b-4ed4-b583-faf8ed9cf00b" containerName="registry-server" Feb 28 05:36:00 crc kubenswrapper[5014]: I0228 05:36:00.184276 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="45f9a71c-0aad-4b18-97d8-fd99506da883" containerName="copy" Feb 28 05:36:00 crc kubenswrapper[5014]: I0228 05:36:00.184297 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="45f9a71c-0aad-4b18-97d8-fd99506da883" containerName="gather" Feb 28 05:36:00 crc kubenswrapper[5014]: I0228 05:36:00.185140 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537616-4smtl" Feb 28 05:36:00 crc kubenswrapper[5014]: I0228 05:36:00.190301 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 05:36:00 crc kubenswrapper[5014]: I0228 05:36:00.190449 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 05:36:00 crc kubenswrapper[5014]: I0228 05:36:00.190579 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-r6pdk" Feb 28 05:36:00 crc kubenswrapper[5014]: I0228 05:36:00.198056 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537616-4smtl"] Feb 28 05:36:00 crc kubenswrapper[5014]: I0228 05:36:00.258994 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rg5g7\" (UniqueName: \"kubernetes.io/projected/0c64a676-9036-492e-a5e4-6201694188a8-kube-api-access-rg5g7\") pod \"auto-csr-approver-29537616-4smtl\" (UID: \"0c64a676-9036-492e-a5e4-6201694188a8\") " pod="openshift-infra/auto-csr-approver-29537616-4smtl" Feb 28 05:36:00 crc kubenswrapper[5014]: I0228 05:36:00.361125 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rg5g7\" (UniqueName: \"kubernetes.io/projected/0c64a676-9036-492e-a5e4-6201694188a8-kube-api-access-rg5g7\") pod \"auto-csr-approver-29537616-4smtl\" (UID: \"0c64a676-9036-492e-a5e4-6201694188a8\") " pod="openshift-infra/auto-csr-approver-29537616-4smtl" Feb 28 05:36:00 crc kubenswrapper[5014]: I0228 05:36:00.404621 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rg5g7\" (UniqueName: \"kubernetes.io/projected/0c64a676-9036-492e-a5e4-6201694188a8-kube-api-access-rg5g7\") pod \"auto-csr-approver-29537616-4smtl\" (UID: \"0c64a676-9036-492e-a5e4-6201694188a8\") " 
pod="openshift-infra/auto-csr-approver-29537616-4smtl" Feb 28 05:36:00 crc kubenswrapper[5014]: I0228 05:36:00.538074 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537616-4smtl" Feb 28 05:36:00 crc kubenswrapper[5014]: I0228 05:36:00.787042 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537616-4smtl"] Feb 28 05:36:00 crc kubenswrapper[5014]: W0228 05:36:00.803094 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c64a676_9036_492e_a5e4_6201694188a8.slice/crio-c55c5284335173e538af74fddbb4a93fb8ae6b3e45ef63d67395e44f112f51cb WatchSource:0}: Error finding container c55c5284335173e538af74fddbb4a93fb8ae6b3e45ef63d67395e44f112f51cb: Status 404 returned error can't find the container with id c55c5284335173e538af74fddbb4a93fb8ae6b3e45ef63d67395e44f112f51cb Feb 28 05:36:00 crc kubenswrapper[5014]: I0228 05:36:00.824942 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537616-4smtl" event={"ID":"0c64a676-9036-492e-a5e4-6201694188a8","Type":"ContainerStarted","Data":"c55c5284335173e538af74fddbb4a93fb8ae6b3e45ef63d67395e44f112f51cb"} Feb 28 05:36:02 crc kubenswrapper[5014]: I0228 05:36:02.847503 5014 generic.go:334] "Generic (PLEG): container finished" podID="0c64a676-9036-492e-a5e4-6201694188a8" containerID="3af1ef9dfa30f5b77d845a0f5f1aa838cdc400d09ee4408493bbb2670d7b4cad" exitCode=0 Feb 28 05:36:02 crc kubenswrapper[5014]: I0228 05:36:02.848506 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537616-4smtl" event={"ID":"0c64a676-9036-492e-a5e4-6201694188a8","Type":"ContainerDied","Data":"3af1ef9dfa30f5b77d845a0f5f1aa838cdc400d09ee4408493bbb2670d7b4cad"} Feb 28 05:36:04 crc kubenswrapper[5014]: I0228 05:36:04.343071 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537616-4smtl" Feb 28 05:36:04 crc kubenswrapper[5014]: I0228 05:36:04.449983 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rg5g7\" (UniqueName: \"kubernetes.io/projected/0c64a676-9036-492e-a5e4-6201694188a8-kube-api-access-rg5g7\") pod \"0c64a676-9036-492e-a5e4-6201694188a8\" (UID: \"0c64a676-9036-492e-a5e4-6201694188a8\") " Feb 28 05:36:04 crc kubenswrapper[5014]: I0228 05:36:04.461143 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c64a676-9036-492e-a5e4-6201694188a8-kube-api-access-rg5g7" (OuterVolumeSpecName: "kube-api-access-rg5g7") pod "0c64a676-9036-492e-a5e4-6201694188a8" (UID: "0c64a676-9036-492e-a5e4-6201694188a8"). InnerVolumeSpecName "kube-api-access-rg5g7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:36:04 crc kubenswrapper[5014]: I0228 05:36:04.552698 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rg5g7\" (UniqueName: \"kubernetes.io/projected/0c64a676-9036-492e-a5e4-6201694188a8-kube-api-access-rg5g7\") on node \"crc\" DevicePath \"\"" Feb 28 05:36:04 crc kubenswrapper[5014]: I0228 05:36:04.902864 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537616-4smtl" event={"ID":"0c64a676-9036-492e-a5e4-6201694188a8","Type":"ContainerDied","Data":"c55c5284335173e538af74fddbb4a93fb8ae6b3e45ef63d67395e44f112f51cb"} Feb 28 05:36:04 crc kubenswrapper[5014]: I0228 05:36:04.902901 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c55c5284335173e538af74fddbb4a93fb8ae6b3e45ef63d67395e44f112f51cb" Feb 28 05:36:04 crc kubenswrapper[5014]: I0228 05:36:04.902975 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537616-4smtl" Feb 28 05:36:05 crc kubenswrapper[5014]: I0228 05:36:05.439713 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537610-wvrbh"] Feb 28 05:36:05 crc kubenswrapper[5014]: I0228 05:36:05.453518 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537610-wvrbh"] Feb 28 05:36:06 crc kubenswrapper[5014]: I0228 05:36:06.185035 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b81bdff3-95bd-45c0-912d-524b27981fd5" path="/var/lib/kubelet/pods/b81bdff3-95bd-45c0-912d-524b27981fd5/volumes" Feb 28 05:36:15 crc kubenswrapper[5014]: I0228 05:36:15.706415 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 05:36:15 crc kubenswrapper[5014]: I0228 05:36:15.707098 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 05:36:45 crc kubenswrapper[5014]: I0228 05:36:45.707127 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 05:36:45 crc kubenswrapper[5014]: I0228 05:36:45.707766 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" 
podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 05:36:59 crc kubenswrapper[5014]: I0228 05:36:59.675534 5014 scope.go:117] "RemoveContainer" containerID="56692b4c1bc0176a424a9e82eb23a903f167eb23b29da5c8ba9d67719d8ae16d" Feb 28 05:37:15 crc kubenswrapper[5014]: I0228 05:37:15.707214 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 05:37:15 crc kubenswrapper[5014]: I0228 05:37:15.707929 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 05:37:15 crc kubenswrapper[5014]: I0228 05:37:15.707996 5014 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cct62" Feb 28 05:37:15 crc kubenswrapper[5014]: I0228 05:37:15.709472 5014 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"40477bc19fb801309320ac64cfcdd068229d097d3d2605fd0be9518095f50e19"} pod="openshift-machine-config-operator/machine-config-daemon-cct62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 05:37:15 crc kubenswrapper[5014]: I0228 05:37:15.709630 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cct62" 
podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" containerID="cri-o://40477bc19fb801309320ac64cfcdd068229d097d3d2605fd0be9518095f50e19" gracePeriod=600 Feb 28 05:37:15 crc kubenswrapper[5014]: E0228 05:37:15.875773 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:37:16 crc kubenswrapper[5014]: I0228 05:37:16.732585 5014 generic.go:334] "Generic (PLEG): container finished" podID="6aad0009-d904-48f8-8e30-82205907ece1" containerID="40477bc19fb801309320ac64cfcdd068229d097d3d2605fd0be9518095f50e19" exitCode=0 Feb 28 05:37:16 crc kubenswrapper[5014]: I0228 05:37:16.732646 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" event={"ID":"6aad0009-d904-48f8-8e30-82205907ece1","Type":"ContainerDied","Data":"40477bc19fb801309320ac64cfcdd068229d097d3d2605fd0be9518095f50e19"} Feb 28 05:37:16 crc kubenswrapper[5014]: I0228 05:37:16.732692 5014 scope.go:117] "RemoveContainer" containerID="ab09a27e103d3311268f3f8870f394e7de849ef8d8bdc4ab745c03fb930a3cfb" Feb 28 05:37:16 crc kubenswrapper[5014]: I0228 05:37:16.733583 5014 scope.go:117] "RemoveContainer" containerID="40477bc19fb801309320ac64cfcdd068229d097d3d2605fd0be9518095f50e19" Feb 28 05:37:16 crc kubenswrapper[5014]: E0228 05:37:16.734067 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:37:28 crc kubenswrapper[5014]: I0228 05:37:28.174072 5014 scope.go:117] "RemoveContainer" containerID="40477bc19fb801309320ac64cfcdd068229d097d3d2605fd0be9518095f50e19" Feb 28 05:37:28 crc kubenswrapper[5014]: E0228 05:37:28.175318 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:37:42 crc kubenswrapper[5014]: I0228 05:37:42.188513 5014 scope.go:117] "RemoveContainer" containerID="40477bc19fb801309320ac64cfcdd068229d097d3d2605fd0be9518095f50e19" Feb 28 05:37:42 crc kubenswrapper[5014]: E0228 05:37:42.195595 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:37:50 crc kubenswrapper[5014]: I0228 05:37:50.323785 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n55sn"] Feb 28 05:37:50 crc kubenswrapper[5014]: E0228 05:37:50.325395 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c64a676-9036-492e-a5e4-6201694188a8" containerName="oc" Feb 28 05:37:50 crc kubenswrapper[5014]: I0228 05:37:50.325427 5014 
state_mem.go:107] "Deleted CPUSet assignment" podUID="0c64a676-9036-492e-a5e4-6201694188a8" containerName="oc" Feb 28 05:37:50 crc kubenswrapper[5014]: I0228 05:37:50.325970 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c64a676-9036-492e-a5e4-6201694188a8" containerName="oc" Feb 28 05:37:50 crc kubenswrapper[5014]: I0228 05:37:50.329137 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n55sn" Feb 28 05:37:50 crc kubenswrapper[5014]: I0228 05:37:50.352563 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n55sn"] Feb 28 05:37:50 crc kubenswrapper[5014]: I0228 05:37:50.520949 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c31d3239-840f-4df0-a19f-7294594f05fa-utilities\") pod \"redhat-operators-n55sn\" (UID: \"c31d3239-840f-4df0-a19f-7294594f05fa\") " pod="openshift-marketplace/redhat-operators-n55sn" Feb 28 05:37:50 crc kubenswrapper[5014]: I0228 05:37:50.521106 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2xwb\" (UniqueName: \"kubernetes.io/projected/c31d3239-840f-4df0-a19f-7294594f05fa-kube-api-access-q2xwb\") pod \"redhat-operators-n55sn\" (UID: \"c31d3239-840f-4df0-a19f-7294594f05fa\") " pod="openshift-marketplace/redhat-operators-n55sn" Feb 28 05:37:50 crc kubenswrapper[5014]: I0228 05:37:50.521348 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c31d3239-840f-4df0-a19f-7294594f05fa-catalog-content\") pod \"redhat-operators-n55sn\" (UID: \"c31d3239-840f-4df0-a19f-7294594f05fa\") " pod="openshift-marketplace/redhat-operators-n55sn" Feb 28 05:37:50 crc kubenswrapper[5014]: I0228 05:37:50.623307 5014 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c31d3239-840f-4df0-a19f-7294594f05fa-catalog-content\") pod \"redhat-operators-n55sn\" (UID: \"c31d3239-840f-4df0-a19f-7294594f05fa\") " pod="openshift-marketplace/redhat-operators-n55sn" Feb 28 05:37:50 crc kubenswrapper[5014]: I0228 05:37:50.623388 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c31d3239-840f-4df0-a19f-7294594f05fa-utilities\") pod \"redhat-operators-n55sn\" (UID: \"c31d3239-840f-4df0-a19f-7294594f05fa\") " pod="openshift-marketplace/redhat-operators-n55sn" Feb 28 05:37:50 crc kubenswrapper[5014]: I0228 05:37:50.623457 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2xwb\" (UniqueName: \"kubernetes.io/projected/c31d3239-840f-4df0-a19f-7294594f05fa-kube-api-access-q2xwb\") pod \"redhat-operators-n55sn\" (UID: \"c31d3239-840f-4df0-a19f-7294594f05fa\") " pod="openshift-marketplace/redhat-operators-n55sn" Feb 28 05:37:50 crc kubenswrapper[5014]: I0228 05:37:50.623727 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c31d3239-840f-4df0-a19f-7294594f05fa-catalog-content\") pod \"redhat-operators-n55sn\" (UID: \"c31d3239-840f-4df0-a19f-7294594f05fa\") " pod="openshift-marketplace/redhat-operators-n55sn" Feb 28 05:37:50 crc kubenswrapper[5014]: I0228 05:37:50.623977 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c31d3239-840f-4df0-a19f-7294594f05fa-utilities\") pod \"redhat-operators-n55sn\" (UID: \"c31d3239-840f-4df0-a19f-7294594f05fa\") " pod="openshift-marketplace/redhat-operators-n55sn" Feb 28 05:37:50 crc kubenswrapper[5014]: I0228 05:37:50.642780 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2xwb\" 
(UniqueName: \"kubernetes.io/projected/c31d3239-840f-4df0-a19f-7294594f05fa-kube-api-access-q2xwb\") pod \"redhat-operators-n55sn\" (UID: \"c31d3239-840f-4df0-a19f-7294594f05fa\") " pod="openshift-marketplace/redhat-operators-n55sn" Feb 28 05:37:50 crc kubenswrapper[5014]: I0228 05:37:50.662966 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n55sn" Feb 28 05:37:51 crc kubenswrapper[5014]: I0228 05:37:51.141913 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n55sn"] Feb 28 05:37:52 crc kubenswrapper[5014]: I0228 05:37:52.116122 5014 generic.go:334] "Generic (PLEG): container finished" podID="c31d3239-840f-4df0-a19f-7294594f05fa" containerID="689fbc54cee1f0ba57682cefae6a51c5b8ab3e57998952a2075d7bbe3ada480d" exitCode=0 Feb 28 05:37:52 crc kubenswrapper[5014]: I0228 05:37:52.116240 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n55sn" event={"ID":"c31d3239-840f-4df0-a19f-7294594f05fa","Type":"ContainerDied","Data":"689fbc54cee1f0ba57682cefae6a51c5b8ab3e57998952a2075d7bbe3ada480d"} Feb 28 05:37:52 crc kubenswrapper[5014]: I0228 05:37:52.116475 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n55sn" event={"ID":"c31d3239-840f-4df0-a19f-7294594f05fa","Type":"ContainerStarted","Data":"457b85812c33ee3b80613a60c70a10043ce1230be876d735f72c07af87709119"} Feb 28 05:37:53 crc kubenswrapper[5014]: I0228 05:37:53.128827 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n55sn" event={"ID":"c31d3239-840f-4df0-a19f-7294594f05fa","Type":"ContainerStarted","Data":"4409e57bc8502d2f0c92252de8117502ab9f652b6f1ca5eb9398c2087c399ceb"} Feb 28 05:37:54 crc kubenswrapper[5014]: I0228 05:37:54.143854 5014 generic.go:334] "Generic (PLEG): container finished" podID="c31d3239-840f-4df0-a19f-7294594f05fa" 
containerID="4409e57bc8502d2f0c92252de8117502ab9f652b6f1ca5eb9398c2087c399ceb" exitCode=0 Feb 28 05:37:54 crc kubenswrapper[5014]: I0228 05:37:54.144016 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n55sn" event={"ID":"c31d3239-840f-4df0-a19f-7294594f05fa","Type":"ContainerDied","Data":"4409e57bc8502d2f0c92252de8117502ab9f652b6f1ca5eb9398c2087c399ceb"} Feb 28 05:37:54 crc kubenswrapper[5014]: I0228 05:37:54.178346 5014 scope.go:117] "RemoveContainer" containerID="40477bc19fb801309320ac64cfcdd068229d097d3d2605fd0be9518095f50e19" Feb 28 05:37:54 crc kubenswrapper[5014]: E0228 05:37:54.179590 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:37:55 crc kubenswrapper[5014]: I0228 05:37:55.159175 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n55sn" event={"ID":"c31d3239-840f-4df0-a19f-7294594f05fa","Type":"ContainerStarted","Data":"308a0b8429183f45e192b9987617f6ac91a7e2c88b71c86ecbe16b803bd9a8c0"} Feb 28 05:37:55 crc kubenswrapper[5014]: I0228 05:37:55.200497 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n55sn" podStartSLOduration=2.769853971 podStartE2EDuration="5.20046785s" podCreationTimestamp="2026-02-28 05:37:50 +0000 UTC" firstStartedPulling="2026-02-28 05:37:52.119109161 +0000 UTC m=+3860.789235111" lastFinishedPulling="2026-02-28 05:37:54.54972307 +0000 UTC m=+3863.219848990" observedRunningTime="2026-02-28 05:37:55.190743218 +0000 UTC m=+3863.860869138" watchObservedRunningTime="2026-02-28 
05:37:55.20046785 +0000 UTC m=+3863.870593810" Feb 28 05:38:00 crc kubenswrapper[5014]: I0228 05:38:00.162943 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537618-htkqd"] Feb 28 05:38:00 crc kubenswrapper[5014]: I0228 05:38:00.165774 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537618-htkqd" Feb 28 05:38:00 crc kubenswrapper[5014]: I0228 05:38:00.168505 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-r6pdk" Feb 28 05:38:00 crc kubenswrapper[5014]: I0228 05:38:00.169181 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 05:38:00 crc kubenswrapper[5014]: I0228 05:38:00.174326 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 05:38:00 crc kubenswrapper[5014]: I0228 05:38:00.186115 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537618-htkqd"] Feb 28 05:38:00 crc kubenswrapper[5014]: I0228 05:38:00.323583 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn2h9\" (UniqueName: \"kubernetes.io/projected/676dc806-f1a9-4fb3-b811-7c015c0f62c1-kube-api-access-sn2h9\") pod \"auto-csr-approver-29537618-htkqd\" (UID: \"676dc806-f1a9-4fb3-b811-7c015c0f62c1\") " pod="openshift-infra/auto-csr-approver-29537618-htkqd" Feb 28 05:38:00 crc kubenswrapper[5014]: I0228 05:38:00.425394 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sn2h9\" (UniqueName: \"kubernetes.io/projected/676dc806-f1a9-4fb3-b811-7c015c0f62c1-kube-api-access-sn2h9\") pod \"auto-csr-approver-29537618-htkqd\" (UID: \"676dc806-f1a9-4fb3-b811-7c015c0f62c1\") " pod="openshift-infra/auto-csr-approver-29537618-htkqd" Feb 28 05:38:00 crc 
kubenswrapper[5014]: I0228 05:38:00.459173 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sn2h9\" (UniqueName: \"kubernetes.io/projected/676dc806-f1a9-4fb3-b811-7c015c0f62c1-kube-api-access-sn2h9\") pod \"auto-csr-approver-29537618-htkqd\" (UID: \"676dc806-f1a9-4fb3-b811-7c015c0f62c1\") " pod="openshift-infra/auto-csr-approver-29537618-htkqd" Feb 28 05:38:00 crc kubenswrapper[5014]: I0228 05:38:00.510638 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537618-htkqd" Feb 28 05:38:00 crc kubenswrapper[5014]: I0228 05:38:00.663247 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n55sn" Feb 28 05:38:00 crc kubenswrapper[5014]: I0228 05:38:00.663509 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n55sn" Feb 28 05:38:01 crc kubenswrapper[5014]: I0228 05:38:01.075246 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537618-htkqd"] Feb 28 05:38:01 crc kubenswrapper[5014]: I0228 05:38:01.227508 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537618-htkqd" event={"ID":"676dc806-f1a9-4fb3-b811-7c015c0f62c1","Type":"ContainerStarted","Data":"af80e74a28e7f61bfc3aab29457c53acc768ed64c8a29b5e4910f75a5336d372"} Feb 28 05:38:01 crc kubenswrapper[5014]: I0228 05:38:01.724667 5014 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n55sn" podUID="c31d3239-840f-4df0-a19f-7294594f05fa" containerName="registry-server" probeResult="failure" output=< Feb 28 05:38:01 crc kubenswrapper[5014]: timeout: failed to connect service ":50051" within 1s Feb 28 05:38:01 crc kubenswrapper[5014]: > Feb 28 05:38:03 crc kubenswrapper[5014]: I0228 05:38:03.250602 5014 generic.go:334] "Generic (PLEG): container finished" 
podID="676dc806-f1a9-4fb3-b811-7c015c0f62c1" containerID="bf9307b0ee9fd74f9f90d7441d91d4c9ee1f67b59133aeeb713d65bcc6ebb064" exitCode=0 Feb 28 05:38:03 crc kubenswrapper[5014]: I0228 05:38:03.251030 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537618-htkqd" event={"ID":"676dc806-f1a9-4fb3-b811-7c015c0f62c1","Type":"ContainerDied","Data":"bf9307b0ee9fd74f9f90d7441d91d4c9ee1f67b59133aeeb713d65bcc6ebb064"} Feb 28 05:38:04 crc kubenswrapper[5014]: I0228 05:38:04.669424 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537618-htkqd" Feb 28 05:38:04 crc kubenswrapper[5014]: I0228 05:38:04.848233 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sn2h9\" (UniqueName: \"kubernetes.io/projected/676dc806-f1a9-4fb3-b811-7c015c0f62c1-kube-api-access-sn2h9\") pod \"676dc806-f1a9-4fb3-b811-7c015c0f62c1\" (UID: \"676dc806-f1a9-4fb3-b811-7c015c0f62c1\") " Feb 28 05:38:04 crc kubenswrapper[5014]: I0228 05:38:04.860256 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/676dc806-f1a9-4fb3-b811-7c015c0f62c1-kube-api-access-sn2h9" (OuterVolumeSpecName: "kube-api-access-sn2h9") pod "676dc806-f1a9-4fb3-b811-7c015c0f62c1" (UID: "676dc806-f1a9-4fb3-b811-7c015c0f62c1"). InnerVolumeSpecName "kube-api-access-sn2h9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:38:04 crc kubenswrapper[5014]: I0228 05:38:04.950983 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sn2h9\" (UniqueName: \"kubernetes.io/projected/676dc806-f1a9-4fb3-b811-7c015c0f62c1-kube-api-access-sn2h9\") on node \"crc\" DevicePath \"\"" Feb 28 05:38:05 crc kubenswrapper[5014]: I0228 05:38:05.282191 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537618-htkqd" event={"ID":"676dc806-f1a9-4fb3-b811-7c015c0f62c1","Type":"ContainerDied","Data":"af80e74a28e7f61bfc3aab29457c53acc768ed64c8a29b5e4910f75a5336d372"} Feb 28 05:38:05 crc kubenswrapper[5014]: I0228 05:38:05.282237 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af80e74a28e7f61bfc3aab29457c53acc768ed64c8a29b5e4910f75a5336d372" Feb 28 05:38:05 crc kubenswrapper[5014]: I0228 05:38:05.282288 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537618-htkqd" Feb 28 05:38:05 crc kubenswrapper[5014]: I0228 05:38:05.768676 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537612-jf9rb"] Feb 28 05:38:05 crc kubenswrapper[5014]: I0228 05:38:05.787122 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537612-jf9rb"] Feb 28 05:38:06 crc kubenswrapper[5014]: I0228 05:38:06.192873 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfd07fbb-9d7f-44c6-9d7f-2ba4d4b859b0" path="/var/lib/kubelet/pods/cfd07fbb-9d7f-44c6-9d7f-2ba4d4b859b0/volumes" Feb 28 05:38:07 crc kubenswrapper[5014]: I0228 05:38:07.173508 5014 scope.go:117] "RemoveContainer" containerID="40477bc19fb801309320ac64cfcdd068229d097d3d2605fd0be9518095f50e19" Feb 28 05:38:07 crc kubenswrapper[5014]: E0228 05:38:07.174428 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:38:10 crc kubenswrapper[5014]: I0228 05:38:10.750382 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n55sn" Feb 28 05:38:10 crc kubenswrapper[5014]: I0228 05:38:10.825536 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n55sn" Feb 28 05:38:10 crc kubenswrapper[5014]: I0228 05:38:10.990473 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n55sn"] Feb 28 05:38:12 crc kubenswrapper[5014]: I0228 05:38:12.363453 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n55sn" podUID="c31d3239-840f-4df0-a19f-7294594f05fa" containerName="registry-server" containerID="cri-o://308a0b8429183f45e192b9987617f6ac91a7e2c88b71c86ecbe16b803bd9a8c0" gracePeriod=2 Feb 28 05:38:12 crc kubenswrapper[5014]: I0228 05:38:12.827851 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n55sn" Feb 28 05:38:12 crc kubenswrapper[5014]: I0228 05:38:12.931576 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c31d3239-840f-4df0-a19f-7294594f05fa-utilities\") pod \"c31d3239-840f-4df0-a19f-7294594f05fa\" (UID: \"c31d3239-840f-4df0-a19f-7294594f05fa\") " Feb 28 05:38:12 crc kubenswrapper[5014]: I0228 05:38:12.931712 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2xwb\" (UniqueName: \"kubernetes.io/projected/c31d3239-840f-4df0-a19f-7294594f05fa-kube-api-access-q2xwb\") pod \"c31d3239-840f-4df0-a19f-7294594f05fa\" (UID: \"c31d3239-840f-4df0-a19f-7294594f05fa\") " Feb 28 05:38:12 crc kubenswrapper[5014]: I0228 05:38:12.931828 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c31d3239-840f-4df0-a19f-7294594f05fa-catalog-content\") pod \"c31d3239-840f-4df0-a19f-7294594f05fa\" (UID: \"c31d3239-840f-4df0-a19f-7294594f05fa\") " Feb 28 05:38:12 crc kubenswrapper[5014]: I0228 05:38:12.932312 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c31d3239-840f-4df0-a19f-7294594f05fa-utilities" (OuterVolumeSpecName: "utilities") pod "c31d3239-840f-4df0-a19f-7294594f05fa" (UID: "c31d3239-840f-4df0-a19f-7294594f05fa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:38:12 crc kubenswrapper[5014]: I0228 05:38:12.937473 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c31d3239-840f-4df0-a19f-7294594f05fa-kube-api-access-q2xwb" (OuterVolumeSpecName: "kube-api-access-q2xwb") pod "c31d3239-840f-4df0-a19f-7294594f05fa" (UID: "c31d3239-840f-4df0-a19f-7294594f05fa"). InnerVolumeSpecName "kube-api-access-q2xwb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:38:13 crc kubenswrapper[5014]: I0228 05:38:13.034393 5014 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c31d3239-840f-4df0-a19f-7294594f05fa-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 05:38:13 crc kubenswrapper[5014]: I0228 05:38:13.034420 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2xwb\" (UniqueName: \"kubernetes.io/projected/c31d3239-840f-4df0-a19f-7294594f05fa-kube-api-access-q2xwb\") on node \"crc\" DevicePath \"\"" Feb 28 05:38:13 crc kubenswrapper[5014]: I0228 05:38:13.073312 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c31d3239-840f-4df0-a19f-7294594f05fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c31d3239-840f-4df0-a19f-7294594f05fa" (UID: "c31d3239-840f-4df0-a19f-7294594f05fa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:38:13 crc kubenswrapper[5014]: I0228 05:38:13.135887 5014 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c31d3239-840f-4df0-a19f-7294594f05fa-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 05:38:13 crc kubenswrapper[5014]: I0228 05:38:13.377917 5014 generic.go:334] "Generic (PLEG): container finished" podID="c31d3239-840f-4df0-a19f-7294594f05fa" containerID="308a0b8429183f45e192b9987617f6ac91a7e2c88b71c86ecbe16b803bd9a8c0" exitCode=0 Feb 28 05:38:13 crc kubenswrapper[5014]: I0228 05:38:13.377980 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n55sn" event={"ID":"c31d3239-840f-4df0-a19f-7294594f05fa","Type":"ContainerDied","Data":"308a0b8429183f45e192b9987617f6ac91a7e2c88b71c86ecbe16b803bd9a8c0"} Feb 28 05:38:13 crc kubenswrapper[5014]: I0228 05:38:13.377990 5014 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n55sn" Feb 28 05:38:13 crc kubenswrapper[5014]: I0228 05:38:13.378029 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n55sn" event={"ID":"c31d3239-840f-4df0-a19f-7294594f05fa","Type":"ContainerDied","Data":"457b85812c33ee3b80613a60c70a10043ce1230be876d735f72c07af87709119"} Feb 28 05:38:13 crc kubenswrapper[5014]: I0228 05:38:13.378060 5014 scope.go:117] "RemoveContainer" containerID="308a0b8429183f45e192b9987617f6ac91a7e2c88b71c86ecbe16b803bd9a8c0" Feb 28 05:38:13 crc kubenswrapper[5014]: I0228 05:38:13.421411 5014 scope.go:117] "RemoveContainer" containerID="4409e57bc8502d2f0c92252de8117502ab9f652b6f1ca5eb9398c2087c399ceb" Feb 28 05:38:13 crc kubenswrapper[5014]: I0228 05:38:13.440422 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n55sn"] Feb 28 05:38:13 crc kubenswrapper[5014]: I0228 05:38:13.455716 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n55sn"] Feb 28 05:38:13 crc kubenswrapper[5014]: I0228 05:38:13.459382 5014 scope.go:117] "RemoveContainer" containerID="689fbc54cee1f0ba57682cefae6a51c5b8ab3e57998952a2075d7bbe3ada480d" Feb 28 05:38:13 crc kubenswrapper[5014]: I0228 05:38:13.512870 5014 scope.go:117] "RemoveContainer" containerID="308a0b8429183f45e192b9987617f6ac91a7e2c88b71c86ecbe16b803bd9a8c0" Feb 28 05:38:13 crc kubenswrapper[5014]: E0228 05:38:13.513766 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"308a0b8429183f45e192b9987617f6ac91a7e2c88b71c86ecbe16b803bd9a8c0\": container with ID starting with 308a0b8429183f45e192b9987617f6ac91a7e2c88b71c86ecbe16b803bd9a8c0 not found: ID does not exist" containerID="308a0b8429183f45e192b9987617f6ac91a7e2c88b71c86ecbe16b803bd9a8c0" Feb 28 05:38:13 crc kubenswrapper[5014]: I0228 05:38:13.513865 5014 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"308a0b8429183f45e192b9987617f6ac91a7e2c88b71c86ecbe16b803bd9a8c0"} err="failed to get container status \"308a0b8429183f45e192b9987617f6ac91a7e2c88b71c86ecbe16b803bd9a8c0\": rpc error: code = NotFound desc = could not find container \"308a0b8429183f45e192b9987617f6ac91a7e2c88b71c86ecbe16b803bd9a8c0\": container with ID starting with 308a0b8429183f45e192b9987617f6ac91a7e2c88b71c86ecbe16b803bd9a8c0 not found: ID does not exist" Feb 28 05:38:13 crc kubenswrapper[5014]: I0228 05:38:13.513908 5014 scope.go:117] "RemoveContainer" containerID="4409e57bc8502d2f0c92252de8117502ab9f652b6f1ca5eb9398c2087c399ceb" Feb 28 05:38:13 crc kubenswrapper[5014]: E0228 05:38:13.514357 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4409e57bc8502d2f0c92252de8117502ab9f652b6f1ca5eb9398c2087c399ceb\": container with ID starting with 4409e57bc8502d2f0c92252de8117502ab9f652b6f1ca5eb9398c2087c399ceb not found: ID does not exist" containerID="4409e57bc8502d2f0c92252de8117502ab9f652b6f1ca5eb9398c2087c399ceb" Feb 28 05:38:13 crc kubenswrapper[5014]: I0228 05:38:13.514409 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4409e57bc8502d2f0c92252de8117502ab9f652b6f1ca5eb9398c2087c399ceb"} err="failed to get container status \"4409e57bc8502d2f0c92252de8117502ab9f652b6f1ca5eb9398c2087c399ceb\": rpc error: code = NotFound desc = could not find container \"4409e57bc8502d2f0c92252de8117502ab9f652b6f1ca5eb9398c2087c399ceb\": container with ID starting with 4409e57bc8502d2f0c92252de8117502ab9f652b6f1ca5eb9398c2087c399ceb not found: ID does not exist" Feb 28 05:38:13 crc kubenswrapper[5014]: I0228 05:38:13.514448 5014 scope.go:117] "RemoveContainer" containerID="689fbc54cee1f0ba57682cefae6a51c5b8ab3e57998952a2075d7bbe3ada480d" Feb 28 05:38:13 crc kubenswrapper[5014]: E0228 
05:38:13.514827 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"689fbc54cee1f0ba57682cefae6a51c5b8ab3e57998952a2075d7bbe3ada480d\": container with ID starting with 689fbc54cee1f0ba57682cefae6a51c5b8ab3e57998952a2075d7bbe3ada480d not found: ID does not exist" containerID="689fbc54cee1f0ba57682cefae6a51c5b8ab3e57998952a2075d7bbe3ada480d" Feb 28 05:38:13 crc kubenswrapper[5014]: I0228 05:38:13.514874 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"689fbc54cee1f0ba57682cefae6a51c5b8ab3e57998952a2075d7bbe3ada480d"} err="failed to get container status \"689fbc54cee1f0ba57682cefae6a51c5b8ab3e57998952a2075d7bbe3ada480d\": rpc error: code = NotFound desc = could not find container \"689fbc54cee1f0ba57682cefae6a51c5b8ab3e57998952a2075d7bbe3ada480d\": container with ID starting with 689fbc54cee1f0ba57682cefae6a51c5b8ab3e57998952a2075d7bbe3ada480d not found: ID does not exist" Feb 28 05:38:14 crc kubenswrapper[5014]: I0228 05:38:14.191419 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c31d3239-840f-4df0-a19f-7294594f05fa" path="/var/lib/kubelet/pods/c31d3239-840f-4df0-a19f-7294594f05fa/volumes" Feb 28 05:38:18 crc kubenswrapper[5014]: I0228 05:38:18.173296 5014 scope.go:117] "RemoveContainer" containerID="40477bc19fb801309320ac64cfcdd068229d097d3d2605fd0be9518095f50e19" Feb 28 05:38:18 crc kubenswrapper[5014]: E0228 05:38:18.174297 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:38:26 crc kubenswrapper[5014]: I0228 05:38:26.124586 
5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v9fbc/must-gather-t6xkn"] Feb 28 05:38:26 crc kubenswrapper[5014]: E0228 05:38:26.125489 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="676dc806-f1a9-4fb3-b811-7c015c0f62c1" containerName="oc" Feb 28 05:38:26 crc kubenswrapper[5014]: I0228 05:38:26.125501 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="676dc806-f1a9-4fb3-b811-7c015c0f62c1" containerName="oc" Feb 28 05:38:26 crc kubenswrapper[5014]: E0228 05:38:26.125523 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c31d3239-840f-4df0-a19f-7294594f05fa" containerName="registry-server" Feb 28 05:38:26 crc kubenswrapper[5014]: I0228 05:38:26.125529 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="c31d3239-840f-4df0-a19f-7294594f05fa" containerName="registry-server" Feb 28 05:38:26 crc kubenswrapper[5014]: E0228 05:38:26.125535 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c31d3239-840f-4df0-a19f-7294594f05fa" containerName="extract-content" Feb 28 05:38:26 crc kubenswrapper[5014]: I0228 05:38:26.125541 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="c31d3239-840f-4df0-a19f-7294594f05fa" containerName="extract-content" Feb 28 05:38:26 crc kubenswrapper[5014]: E0228 05:38:26.125553 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c31d3239-840f-4df0-a19f-7294594f05fa" containerName="extract-utilities" Feb 28 05:38:26 crc kubenswrapper[5014]: I0228 05:38:26.125558 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="c31d3239-840f-4df0-a19f-7294594f05fa" containerName="extract-utilities" Feb 28 05:38:26 crc kubenswrapper[5014]: I0228 05:38:26.125718 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="676dc806-f1a9-4fb3-b811-7c015c0f62c1" containerName="oc" Feb 28 05:38:26 crc kubenswrapper[5014]: I0228 05:38:26.125728 5014 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c31d3239-840f-4df0-a19f-7294594f05fa" containerName="registry-server" Feb 28 05:38:26 crc kubenswrapper[5014]: I0228 05:38:26.126691 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v9fbc/must-gather-t6xkn" Feb 28 05:38:26 crc kubenswrapper[5014]: I0228 05:38:26.128580 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-v9fbc"/"default-dockercfg-9k5v8" Feb 28 05:38:26 crc kubenswrapper[5014]: I0228 05:38:26.128709 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-v9fbc"/"kube-root-ca.crt" Feb 28 05:38:26 crc kubenswrapper[5014]: I0228 05:38:26.129492 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-v9fbc"/"openshift-service-ca.crt" Feb 28 05:38:26 crc kubenswrapper[5014]: I0228 05:38:26.133634 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-v9fbc/must-gather-t6xkn"] Feb 28 05:38:26 crc kubenswrapper[5014]: I0228 05:38:26.151933 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bab79593-07cf-4a70-881f-fa06508b63af-must-gather-output\") pod \"must-gather-t6xkn\" (UID: \"bab79593-07cf-4a70-881f-fa06508b63af\") " pod="openshift-must-gather-v9fbc/must-gather-t6xkn" Feb 28 05:38:26 crc kubenswrapper[5014]: I0228 05:38:26.152078 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmng4\" (UniqueName: \"kubernetes.io/projected/bab79593-07cf-4a70-881f-fa06508b63af-kube-api-access-pmng4\") pod \"must-gather-t6xkn\" (UID: \"bab79593-07cf-4a70-881f-fa06508b63af\") " pod="openshift-must-gather-v9fbc/must-gather-t6xkn" Feb 28 05:38:26 crc kubenswrapper[5014]: I0228 05:38:26.253062 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" 
(UniqueName: \"kubernetes.io/empty-dir/bab79593-07cf-4a70-881f-fa06508b63af-must-gather-output\") pod \"must-gather-t6xkn\" (UID: \"bab79593-07cf-4a70-881f-fa06508b63af\") " pod="openshift-must-gather-v9fbc/must-gather-t6xkn" Feb 28 05:38:26 crc kubenswrapper[5014]: I0228 05:38:26.253192 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmng4\" (UniqueName: \"kubernetes.io/projected/bab79593-07cf-4a70-881f-fa06508b63af-kube-api-access-pmng4\") pod \"must-gather-t6xkn\" (UID: \"bab79593-07cf-4a70-881f-fa06508b63af\") " pod="openshift-must-gather-v9fbc/must-gather-t6xkn" Feb 28 05:38:26 crc kubenswrapper[5014]: I0228 05:38:26.254011 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bab79593-07cf-4a70-881f-fa06508b63af-must-gather-output\") pod \"must-gather-t6xkn\" (UID: \"bab79593-07cf-4a70-881f-fa06508b63af\") " pod="openshift-must-gather-v9fbc/must-gather-t6xkn" Feb 28 05:38:26 crc kubenswrapper[5014]: I0228 05:38:26.273361 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmng4\" (UniqueName: \"kubernetes.io/projected/bab79593-07cf-4a70-881f-fa06508b63af-kube-api-access-pmng4\") pod \"must-gather-t6xkn\" (UID: \"bab79593-07cf-4a70-881f-fa06508b63af\") " pod="openshift-must-gather-v9fbc/must-gather-t6xkn" Feb 28 05:38:26 crc kubenswrapper[5014]: I0228 05:38:26.444523 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v9fbc/must-gather-t6xkn" Feb 28 05:38:27 crc kubenswrapper[5014]: I0228 05:38:27.097888 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-v9fbc/must-gather-t6xkn"] Feb 28 05:38:27 crc kubenswrapper[5014]: W0228 05:38:27.108287 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbab79593_07cf_4a70_881f_fa06508b63af.slice/crio-86b11a5e8efc9c80fa9f8c9eae62a24006c824b833cab5aa7001997884d8726e WatchSource:0}: Error finding container 86b11a5e8efc9c80fa9f8c9eae62a24006c824b833cab5aa7001997884d8726e: Status 404 returned error can't find the container with id 86b11a5e8efc9c80fa9f8c9eae62a24006c824b833cab5aa7001997884d8726e Feb 28 05:38:27 crc kubenswrapper[5014]: I0228 05:38:27.575173 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v9fbc/must-gather-t6xkn" event={"ID":"bab79593-07cf-4a70-881f-fa06508b63af","Type":"ContainerStarted","Data":"6ce7a1813ea157c99d1cc6b62f15e8aefcbc8b50711ca43702e2931f45693b2b"} Feb 28 05:38:27 crc kubenswrapper[5014]: I0228 05:38:27.575596 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v9fbc/must-gather-t6xkn" event={"ID":"bab79593-07cf-4a70-881f-fa06508b63af","Type":"ContainerStarted","Data":"86b11a5e8efc9c80fa9f8c9eae62a24006c824b833cab5aa7001997884d8726e"} Feb 28 05:38:28 crc kubenswrapper[5014]: I0228 05:38:28.588161 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v9fbc/must-gather-t6xkn" event={"ID":"bab79593-07cf-4a70-881f-fa06508b63af","Type":"ContainerStarted","Data":"71f919f5f21a08514dddbc6c7c83574591e2eb8367456ead9ede8c091975270d"} Feb 28 05:38:30 crc kubenswrapper[5014]: I0228 05:38:30.172695 5014 scope.go:117] "RemoveContainer" containerID="40477bc19fb801309320ac64cfcdd068229d097d3d2605fd0be9518095f50e19" Feb 28 05:38:30 crc kubenswrapper[5014]: E0228 05:38:30.173172 
5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:38:31 crc kubenswrapper[5014]: I0228 05:38:31.019571 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-v9fbc/must-gather-t6xkn" podStartSLOduration=5.019555348 podStartE2EDuration="5.019555348s" podCreationTimestamp="2026-02-28 05:38:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 05:38:28.6181858 +0000 UTC m=+3897.288311740" watchObservedRunningTime="2026-02-28 05:38:31.019555348 +0000 UTC m=+3899.689681258" Feb 28 05:38:31 crc kubenswrapper[5014]: I0228 05:38:31.022707 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v9fbc/crc-debug-bl8sk"] Feb 28 05:38:31 crc kubenswrapper[5014]: I0228 05:38:31.023758 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v9fbc/crc-debug-bl8sk" Feb 28 05:38:31 crc kubenswrapper[5014]: I0228 05:38:31.154381 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fddhz\" (UniqueName: \"kubernetes.io/projected/45149232-7346-449f-936d-ce88cb95b1f9-kube-api-access-fddhz\") pod \"crc-debug-bl8sk\" (UID: \"45149232-7346-449f-936d-ce88cb95b1f9\") " pod="openshift-must-gather-v9fbc/crc-debug-bl8sk" Feb 28 05:38:31 crc kubenswrapper[5014]: I0228 05:38:31.155077 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/45149232-7346-449f-936d-ce88cb95b1f9-host\") pod \"crc-debug-bl8sk\" (UID: \"45149232-7346-449f-936d-ce88cb95b1f9\") " pod="openshift-must-gather-v9fbc/crc-debug-bl8sk" Feb 28 05:38:31 crc kubenswrapper[5014]: I0228 05:38:31.256978 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fddhz\" (UniqueName: \"kubernetes.io/projected/45149232-7346-449f-936d-ce88cb95b1f9-kube-api-access-fddhz\") pod \"crc-debug-bl8sk\" (UID: \"45149232-7346-449f-936d-ce88cb95b1f9\") " pod="openshift-must-gather-v9fbc/crc-debug-bl8sk" Feb 28 05:38:31 crc kubenswrapper[5014]: I0228 05:38:31.257103 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/45149232-7346-449f-936d-ce88cb95b1f9-host\") pod \"crc-debug-bl8sk\" (UID: \"45149232-7346-449f-936d-ce88cb95b1f9\") " pod="openshift-must-gather-v9fbc/crc-debug-bl8sk" Feb 28 05:38:31 crc kubenswrapper[5014]: I0228 05:38:31.257461 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/45149232-7346-449f-936d-ce88cb95b1f9-host\") pod \"crc-debug-bl8sk\" (UID: \"45149232-7346-449f-936d-ce88cb95b1f9\") " pod="openshift-must-gather-v9fbc/crc-debug-bl8sk" Feb 28 05:38:31 crc 
kubenswrapper[5014]: I0228 05:38:31.282534 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fddhz\" (UniqueName: \"kubernetes.io/projected/45149232-7346-449f-936d-ce88cb95b1f9-kube-api-access-fddhz\") pod \"crc-debug-bl8sk\" (UID: \"45149232-7346-449f-936d-ce88cb95b1f9\") " pod="openshift-must-gather-v9fbc/crc-debug-bl8sk" Feb 28 05:38:31 crc kubenswrapper[5014]: I0228 05:38:31.340713 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v9fbc/crc-debug-bl8sk" Feb 28 05:38:31 crc kubenswrapper[5014]: I0228 05:38:31.625294 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v9fbc/crc-debug-bl8sk" event={"ID":"45149232-7346-449f-936d-ce88cb95b1f9","Type":"ContainerStarted","Data":"cd339e5ee8260b301c1cd9a20074ab5353e59f5abfd17205673d816755222684"} Feb 28 05:38:31 crc kubenswrapper[5014]: I0228 05:38:31.640537 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-v9fbc/crc-debug-bl8sk" podStartSLOduration=0.640519363 podStartE2EDuration="640.519363ms" podCreationTimestamp="2026-02-28 05:38:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 05:38:31.637749908 +0000 UTC m=+3900.307875828" watchObservedRunningTime="2026-02-28 05:38:31.640519363 +0000 UTC m=+3900.310645283" Feb 28 05:38:32 crc kubenswrapper[5014]: I0228 05:38:32.636977 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v9fbc/crc-debug-bl8sk" event={"ID":"45149232-7346-449f-936d-ce88cb95b1f9","Type":"ContainerStarted","Data":"998f94e384cbec622704cdfc575364575f72ad4c48bead1520d240184b32957a"} Feb 28 05:38:43 crc kubenswrapper[5014]: I0228 05:38:43.171678 5014 scope.go:117] "RemoveContainer" containerID="40477bc19fb801309320ac64cfcdd068229d097d3d2605fd0be9518095f50e19" Feb 28 05:38:43 crc kubenswrapper[5014]: E0228 
05:38:43.172332 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:38:55 crc kubenswrapper[5014]: I0228 05:38:55.172399 5014 scope.go:117] "RemoveContainer" containerID="40477bc19fb801309320ac64cfcdd068229d097d3d2605fd0be9518095f50e19" Feb 28 05:38:55 crc kubenswrapper[5014]: E0228 05:38:55.173200 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:38:59 crc kubenswrapper[5014]: I0228 05:38:59.840399 5014 scope.go:117] "RemoveContainer" containerID="fdf791af0089a7da1458fbdf1340d8e7b595d33219f633590233ed4472fc0d05" Feb 28 05:39:03 crc kubenswrapper[5014]: I0228 05:39:03.324655 5014 generic.go:334] "Generic (PLEG): container finished" podID="45149232-7346-449f-936d-ce88cb95b1f9" containerID="998f94e384cbec622704cdfc575364575f72ad4c48bead1520d240184b32957a" exitCode=0 Feb 28 05:39:03 crc kubenswrapper[5014]: I0228 05:39:03.324892 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v9fbc/crc-debug-bl8sk" event={"ID":"45149232-7346-449f-936d-ce88cb95b1f9","Type":"ContainerDied","Data":"998f94e384cbec622704cdfc575364575f72ad4c48bead1520d240184b32957a"} Feb 28 05:39:04 crc kubenswrapper[5014]: I0228 05:39:04.438748 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v9fbc/crc-debug-bl8sk" Feb 28 05:39:04 crc kubenswrapper[5014]: I0228 05:39:04.497492 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v9fbc/crc-debug-bl8sk"] Feb 28 05:39:04 crc kubenswrapper[5014]: I0228 05:39:04.507912 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/45149232-7346-449f-936d-ce88cb95b1f9-host\") pod \"45149232-7346-449f-936d-ce88cb95b1f9\" (UID: \"45149232-7346-449f-936d-ce88cb95b1f9\") " Feb 28 05:39:04 crc kubenswrapper[5014]: I0228 05:39:04.508017 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45149232-7346-449f-936d-ce88cb95b1f9-host" (OuterVolumeSpecName: "host") pod "45149232-7346-449f-936d-ce88cb95b1f9" (UID: "45149232-7346-449f-936d-ce88cb95b1f9"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 05:39:04 crc kubenswrapper[5014]: I0228 05:39:04.508152 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fddhz\" (UniqueName: \"kubernetes.io/projected/45149232-7346-449f-936d-ce88cb95b1f9-kube-api-access-fddhz\") pod \"45149232-7346-449f-936d-ce88cb95b1f9\" (UID: \"45149232-7346-449f-936d-ce88cb95b1f9\") " Feb 28 05:39:04 crc kubenswrapper[5014]: I0228 05:39:04.508777 5014 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/45149232-7346-449f-936d-ce88cb95b1f9-host\") on node \"crc\" DevicePath \"\"" Feb 28 05:39:04 crc kubenswrapper[5014]: I0228 05:39:04.510651 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v9fbc/crc-debug-bl8sk"] Feb 28 05:39:04 crc kubenswrapper[5014]: I0228 05:39:04.515658 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45149232-7346-449f-936d-ce88cb95b1f9-kube-api-access-fddhz" 
(OuterVolumeSpecName: "kube-api-access-fddhz") pod "45149232-7346-449f-936d-ce88cb95b1f9" (UID: "45149232-7346-449f-936d-ce88cb95b1f9"). InnerVolumeSpecName "kube-api-access-fddhz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:39:04 crc kubenswrapper[5014]: I0228 05:39:04.612147 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fddhz\" (UniqueName: \"kubernetes.io/projected/45149232-7346-449f-936d-ce88cb95b1f9-kube-api-access-fddhz\") on node \"crc\" DevicePath \"\"" Feb 28 05:39:05 crc kubenswrapper[5014]: I0228 05:39:05.359244 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd339e5ee8260b301c1cd9a20074ab5353e59f5abfd17205673d816755222684" Feb 28 05:39:05 crc kubenswrapper[5014]: I0228 05:39:05.359368 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v9fbc/crc-debug-bl8sk" Feb 28 05:39:05 crc kubenswrapper[5014]: I0228 05:39:05.703622 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v9fbc/crc-debug-gkckq"] Feb 28 05:39:05 crc kubenswrapper[5014]: E0228 05:39:05.704032 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45149232-7346-449f-936d-ce88cb95b1f9" containerName="container-00" Feb 28 05:39:05 crc kubenswrapper[5014]: I0228 05:39:05.704046 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="45149232-7346-449f-936d-ce88cb95b1f9" containerName="container-00" Feb 28 05:39:05 crc kubenswrapper[5014]: I0228 05:39:05.704211 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="45149232-7346-449f-936d-ce88cb95b1f9" containerName="container-00" Feb 28 05:39:05 crc kubenswrapper[5014]: I0228 05:39:05.704788 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v9fbc/crc-debug-gkckq" Feb 28 05:39:05 crc kubenswrapper[5014]: I0228 05:39:05.754229 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bll9h\" (UniqueName: \"kubernetes.io/projected/b28f394b-cc6e-42e4-9e10-02efad4276f1-kube-api-access-bll9h\") pod \"crc-debug-gkckq\" (UID: \"b28f394b-cc6e-42e4-9e10-02efad4276f1\") " pod="openshift-must-gather-v9fbc/crc-debug-gkckq" Feb 28 05:39:05 crc kubenswrapper[5014]: I0228 05:39:05.754505 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b28f394b-cc6e-42e4-9e10-02efad4276f1-host\") pod \"crc-debug-gkckq\" (UID: \"b28f394b-cc6e-42e4-9e10-02efad4276f1\") " pod="openshift-must-gather-v9fbc/crc-debug-gkckq" Feb 28 05:39:05 crc kubenswrapper[5014]: I0228 05:39:05.856298 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b28f394b-cc6e-42e4-9e10-02efad4276f1-host\") pod \"crc-debug-gkckq\" (UID: \"b28f394b-cc6e-42e4-9e10-02efad4276f1\") " pod="openshift-must-gather-v9fbc/crc-debug-gkckq" Feb 28 05:39:05 crc kubenswrapper[5014]: I0228 05:39:05.856465 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b28f394b-cc6e-42e4-9e10-02efad4276f1-host\") pod \"crc-debug-gkckq\" (UID: \"b28f394b-cc6e-42e4-9e10-02efad4276f1\") " pod="openshift-must-gather-v9fbc/crc-debug-gkckq" Feb 28 05:39:05 crc kubenswrapper[5014]: I0228 05:39:05.856471 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bll9h\" (UniqueName: \"kubernetes.io/projected/b28f394b-cc6e-42e4-9e10-02efad4276f1-kube-api-access-bll9h\") pod \"crc-debug-gkckq\" (UID: \"b28f394b-cc6e-42e4-9e10-02efad4276f1\") " pod="openshift-must-gather-v9fbc/crc-debug-gkckq" Feb 28 05:39:05 crc 
kubenswrapper[5014]: I0228 05:39:05.877983 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bll9h\" (UniqueName: \"kubernetes.io/projected/b28f394b-cc6e-42e4-9e10-02efad4276f1-kube-api-access-bll9h\") pod \"crc-debug-gkckq\" (UID: \"b28f394b-cc6e-42e4-9e10-02efad4276f1\") " pod="openshift-must-gather-v9fbc/crc-debug-gkckq" Feb 28 05:39:06 crc kubenswrapper[5014]: I0228 05:39:06.020825 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v9fbc/crc-debug-gkckq" Feb 28 05:39:06 crc kubenswrapper[5014]: I0228 05:39:06.183522 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45149232-7346-449f-936d-ce88cb95b1f9" path="/var/lib/kubelet/pods/45149232-7346-449f-936d-ce88cb95b1f9/volumes" Feb 28 05:39:06 crc kubenswrapper[5014]: I0228 05:39:06.370497 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v9fbc/crc-debug-gkckq" event={"ID":"b28f394b-cc6e-42e4-9e10-02efad4276f1","Type":"ContainerStarted","Data":"d17cc5e46448a11b318b7f0bcbef86f6b14109e4154153176a104be7edecabd3"} Feb 28 05:39:06 crc kubenswrapper[5014]: I0228 05:39:06.370540 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v9fbc/crc-debug-gkckq" event={"ID":"b28f394b-cc6e-42e4-9e10-02efad4276f1","Type":"ContainerStarted","Data":"8330f0c39b3d81ecdb9d991eb9a7c4c1ed23ce05b7c5f7ae91e31e9399a9c5f7"} Feb 28 05:39:06 crc kubenswrapper[5014]: I0228 05:39:06.390258 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-v9fbc/crc-debug-gkckq" podStartSLOduration=1.390240478 podStartE2EDuration="1.390240478s" podCreationTimestamp="2026-02-28 05:39:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 05:39:06.38510017 +0000 UTC m=+3935.055226080" watchObservedRunningTime="2026-02-28 05:39:06.390240478 +0000 
UTC m=+3935.060366388" Feb 28 05:39:07 crc kubenswrapper[5014]: I0228 05:39:07.172793 5014 scope.go:117] "RemoveContainer" containerID="40477bc19fb801309320ac64cfcdd068229d097d3d2605fd0be9518095f50e19" Feb 28 05:39:07 crc kubenswrapper[5014]: E0228 05:39:07.173560 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:39:07 crc kubenswrapper[5014]: I0228 05:39:07.382551 5014 generic.go:334] "Generic (PLEG): container finished" podID="b28f394b-cc6e-42e4-9e10-02efad4276f1" containerID="d17cc5e46448a11b318b7f0bcbef86f6b14109e4154153176a104be7edecabd3" exitCode=0 Feb 28 05:39:07 crc kubenswrapper[5014]: I0228 05:39:07.382681 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v9fbc/crc-debug-gkckq" event={"ID":"b28f394b-cc6e-42e4-9e10-02efad4276f1","Type":"ContainerDied","Data":"d17cc5e46448a11b318b7f0bcbef86f6b14109e4154153176a104be7edecabd3"} Feb 28 05:39:08 crc kubenswrapper[5014]: I0228 05:39:08.525873 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v9fbc/crc-debug-gkckq" Feb 28 05:39:08 crc kubenswrapper[5014]: I0228 05:39:08.594014 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v9fbc/crc-debug-gkckq"] Feb 28 05:39:08 crc kubenswrapper[5014]: I0228 05:39:08.604128 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v9fbc/crc-debug-gkckq"] Feb 28 05:39:08 crc kubenswrapper[5014]: I0228 05:39:08.721151 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b28f394b-cc6e-42e4-9e10-02efad4276f1-host\") pod \"b28f394b-cc6e-42e4-9e10-02efad4276f1\" (UID: \"b28f394b-cc6e-42e4-9e10-02efad4276f1\") " Feb 28 05:39:08 crc kubenswrapper[5014]: I0228 05:39:08.721316 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bll9h\" (UniqueName: \"kubernetes.io/projected/b28f394b-cc6e-42e4-9e10-02efad4276f1-kube-api-access-bll9h\") pod \"b28f394b-cc6e-42e4-9e10-02efad4276f1\" (UID: \"b28f394b-cc6e-42e4-9e10-02efad4276f1\") " Feb 28 05:39:08 crc kubenswrapper[5014]: I0228 05:39:08.721323 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b28f394b-cc6e-42e4-9e10-02efad4276f1-host" (OuterVolumeSpecName: "host") pod "b28f394b-cc6e-42e4-9e10-02efad4276f1" (UID: "b28f394b-cc6e-42e4-9e10-02efad4276f1"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 05:39:08 crc kubenswrapper[5014]: I0228 05:39:08.721841 5014 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b28f394b-cc6e-42e4-9e10-02efad4276f1-host\") on node \"crc\" DevicePath \"\"" Feb 28 05:39:08 crc kubenswrapper[5014]: I0228 05:39:08.727229 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b28f394b-cc6e-42e4-9e10-02efad4276f1-kube-api-access-bll9h" (OuterVolumeSpecName: "kube-api-access-bll9h") pod "b28f394b-cc6e-42e4-9e10-02efad4276f1" (UID: "b28f394b-cc6e-42e4-9e10-02efad4276f1"). InnerVolumeSpecName "kube-api-access-bll9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:39:08 crc kubenswrapper[5014]: I0228 05:39:08.823717 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bll9h\" (UniqueName: \"kubernetes.io/projected/b28f394b-cc6e-42e4-9e10-02efad4276f1-kube-api-access-bll9h\") on node \"crc\" DevicePath \"\"" Feb 28 05:39:09 crc kubenswrapper[5014]: I0228 05:39:09.435159 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8330f0c39b3d81ecdb9d991eb9a7c4c1ed23ce05b7c5f7ae91e31e9399a9c5f7" Feb 28 05:39:09 crc kubenswrapper[5014]: I0228 05:39:09.435204 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v9fbc/crc-debug-gkckq" Feb 28 05:39:09 crc kubenswrapper[5014]: I0228 05:39:09.783058 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-v9fbc/crc-debug-ts552"] Feb 28 05:39:09 crc kubenswrapper[5014]: E0228 05:39:09.783538 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b28f394b-cc6e-42e4-9e10-02efad4276f1" containerName="container-00" Feb 28 05:39:09 crc kubenswrapper[5014]: I0228 05:39:09.783556 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="b28f394b-cc6e-42e4-9e10-02efad4276f1" containerName="container-00" Feb 28 05:39:09 crc kubenswrapper[5014]: I0228 05:39:09.783813 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="b28f394b-cc6e-42e4-9e10-02efad4276f1" containerName="container-00" Feb 28 05:39:09 crc kubenswrapper[5014]: I0228 05:39:09.785234 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v9fbc/crc-debug-ts552" Feb 28 05:39:09 crc kubenswrapper[5014]: I0228 05:39:09.941919 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q4k9\" (UniqueName: \"kubernetes.io/projected/89df7552-1d43-46ef-875a-ca5411e102c5-kube-api-access-9q4k9\") pod \"crc-debug-ts552\" (UID: \"89df7552-1d43-46ef-875a-ca5411e102c5\") " pod="openshift-must-gather-v9fbc/crc-debug-ts552" Feb 28 05:39:09 crc kubenswrapper[5014]: I0228 05:39:09.941990 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/89df7552-1d43-46ef-875a-ca5411e102c5-host\") pod \"crc-debug-ts552\" (UID: \"89df7552-1d43-46ef-875a-ca5411e102c5\") " pod="openshift-must-gather-v9fbc/crc-debug-ts552" Feb 28 05:39:10 crc kubenswrapper[5014]: I0228 05:39:10.044282 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q4k9\" (UniqueName: 
\"kubernetes.io/projected/89df7552-1d43-46ef-875a-ca5411e102c5-kube-api-access-9q4k9\") pod \"crc-debug-ts552\" (UID: \"89df7552-1d43-46ef-875a-ca5411e102c5\") " pod="openshift-must-gather-v9fbc/crc-debug-ts552" Feb 28 05:39:10 crc kubenswrapper[5014]: I0228 05:39:10.044355 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/89df7552-1d43-46ef-875a-ca5411e102c5-host\") pod \"crc-debug-ts552\" (UID: \"89df7552-1d43-46ef-875a-ca5411e102c5\") " pod="openshift-must-gather-v9fbc/crc-debug-ts552" Feb 28 05:39:10 crc kubenswrapper[5014]: I0228 05:39:10.044468 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/89df7552-1d43-46ef-875a-ca5411e102c5-host\") pod \"crc-debug-ts552\" (UID: \"89df7552-1d43-46ef-875a-ca5411e102c5\") " pod="openshift-must-gather-v9fbc/crc-debug-ts552" Feb 28 05:39:10 crc kubenswrapper[5014]: I0228 05:39:10.060951 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q4k9\" (UniqueName: \"kubernetes.io/projected/89df7552-1d43-46ef-875a-ca5411e102c5-kube-api-access-9q4k9\") pod \"crc-debug-ts552\" (UID: \"89df7552-1d43-46ef-875a-ca5411e102c5\") " pod="openshift-must-gather-v9fbc/crc-debug-ts552" Feb 28 05:39:10 crc kubenswrapper[5014]: I0228 05:39:10.101737 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v9fbc/crc-debug-ts552" Feb 28 05:39:10 crc kubenswrapper[5014]: W0228 05:39:10.127168 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89df7552_1d43_46ef_875a_ca5411e102c5.slice/crio-245d5d650f822f4441009476194dd34690e1838c85a681503142942685252f08 WatchSource:0}: Error finding container 245d5d650f822f4441009476194dd34690e1838c85a681503142942685252f08: Status 404 returned error can't find the container with id 245d5d650f822f4441009476194dd34690e1838c85a681503142942685252f08 Feb 28 05:39:10 crc kubenswrapper[5014]: I0228 05:39:10.180301 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b28f394b-cc6e-42e4-9e10-02efad4276f1" path="/var/lib/kubelet/pods/b28f394b-cc6e-42e4-9e10-02efad4276f1/volumes" Feb 28 05:39:10 crc kubenswrapper[5014]: I0228 05:39:10.444957 5014 generic.go:334] "Generic (PLEG): container finished" podID="89df7552-1d43-46ef-875a-ca5411e102c5" containerID="7184b0284c51b5a9bc976e13cddc0efcd1ff56f153e4f43c1e1c538fe2388f05" exitCode=0 Feb 28 05:39:10 crc kubenswrapper[5014]: I0228 05:39:10.445207 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v9fbc/crc-debug-ts552" event={"ID":"89df7552-1d43-46ef-875a-ca5411e102c5","Type":"ContainerDied","Data":"7184b0284c51b5a9bc976e13cddc0efcd1ff56f153e4f43c1e1c538fe2388f05"} Feb 28 05:39:10 crc kubenswrapper[5014]: I0228 05:39:10.445233 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v9fbc/crc-debug-ts552" event={"ID":"89df7552-1d43-46ef-875a-ca5411e102c5","Type":"ContainerStarted","Data":"245d5d650f822f4441009476194dd34690e1838c85a681503142942685252f08"} Feb 28 05:39:10 crc kubenswrapper[5014]: I0228 05:39:10.488645 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v9fbc/crc-debug-ts552"] Feb 28 05:39:10 crc kubenswrapper[5014]: I0228 05:39:10.499289 5014 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v9fbc/crc-debug-ts552"] Feb 28 05:39:11 crc kubenswrapper[5014]: I0228 05:39:11.554336 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-v9fbc/crc-debug-ts552" Feb 28 05:39:11 crc kubenswrapper[5014]: I0228 05:39:11.670842 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q4k9\" (UniqueName: \"kubernetes.io/projected/89df7552-1d43-46ef-875a-ca5411e102c5-kube-api-access-9q4k9\") pod \"89df7552-1d43-46ef-875a-ca5411e102c5\" (UID: \"89df7552-1d43-46ef-875a-ca5411e102c5\") " Feb 28 05:39:11 crc kubenswrapper[5014]: I0228 05:39:11.670937 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/89df7552-1d43-46ef-875a-ca5411e102c5-host\") pod \"89df7552-1d43-46ef-875a-ca5411e102c5\" (UID: \"89df7552-1d43-46ef-875a-ca5411e102c5\") " Feb 28 05:39:11 crc kubenswrapper[5014]: I0228 05:39:11.671143 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89df7552-1d43-46ef-875a-ca5411e102c5-host" (OuterVolumeSpecName: "host") pod "89df7552-1d43-46ef-875a-ca5411e102c5" (UID: "89df7552-1d43-46ef-875a-ca5411e102c5"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 28 05:39:11 crc kubenswrapper[5014]: I0228 05:39:11.671840 5014 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/89df7552-1d43-46ef-875a-ca5411e102c5-host\") on node \"crc\" DevicePath \"\"" Feb 28 05:39:11 crc kubenswrapper[5014]: I0228 05:39:11.688363 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89df7552-1d43-46ef-875a-ca5411e102c5-kube-api-access-9q4k9" (OuterVolumeSpecName: "kube-api-access-9q4k9") pod "89df7552-1d43-46ef-875a-ca5411e102c5" (UID: "89df7552-1d43-46ef-875a-ca5411e102c5"). InnerVolumeSpecName "kube-api-access-9q4k9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:39:11 crc kubenswrapper[5014]: I0228 05:39:11.773301 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9q4k9\" (UniqueName: \"kubernetes.io/projected/89df7552-1d43-46ef-875a-ca5411e102c5-kube-api-access-9q4k9\") on node \"crc\" DevicePath \"\"" Feb 28 05:39:12 crc kubenswrapper[5014]: I0228 05:39:12.182975 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89df7552-1d43-46ef-875a-ca5411e102c5" path="/var/lib/kubelet/pods/89df7552-1d43-46ef-875a-ca5411e102c5/volumes" Feb 28 05:39:12 crc kubenswrapper[5014]: I0228 05:39:12.470461 5014 scope.go:117] "RemoveContainer" containerID="7184b0284c51b5a9bc976e13cddc0efcd1ff56f153e4f43c1e1c538fe2388f05" Feb 28 05:39:12 crc kubenswrapper[5014]: I0228 05:39:12.470535 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v9fbc/crc-debug-ts552" Feb 28 05:39:22 crc kubenswrapper[5014]: I0228 05:39:22.186169 5014 scope.go:117] "RemoveContainer" containerID="40477bc19fb801309320ac64cfcdd068229d097d3d2605fd0be9518095f50e19" Feb 28 05:39:22 crc kubenswrapper[5014]: E0228 05:39:22.186995 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:39:33 crc kubenswrapper[5014]: I0228 05:39:33.171468 5014 scope.go:117] "RemoveContainer" containerID="40477bc19fb801309320ac64cfcdd068229d097d3d2605fd0be9518095f50e19" Feb 28 05:39:33 crc kubenswrapper[5014]: E0228 05:39:33.173402 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:39:43 crc kubenswrapper[5014]: I0228 05:39:43.573359 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-85ff55b8dd-q46np_0c857c36-d78c-484b-a0b1-1cabf11c32a3/barbican-api/0.log" Feb 28 05:39:43 crc kubenswrapper[5014]: I0228 05:39:43.761684 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-85ff55b8dd-q46np_0c857c36-d78c-484b-a0b1-1cabf11c32a3/barbican-api-log/0.log" Feb 28 05:39:43 crc kubenswrapper[5014]: I0228 05:39:43.801465 5014 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-7dd8f4645d-ckwth_bd8db062-b379-402e-a83b-291ee7e55bf1/barbican-keystone-listener/0.log" Feb 28 05:39:43 crc kubenswrapper[5014]: I0228 05:39:43.849729 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-7dd8f4645d-ckwth_bd8db062-b379-402e-a83b-291ee7e55bf1/barbican-keystone-listener-log/0.log" Feb 28 05:39:43 crc kubenswrapper[5014]: I0228 05:39:43.929142 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-76c688b599-br8wc_45715325-beb1-4639-bb3c-d466fc6e85ce/barbican-worker/0.log" Feb 28 05:39:44 crc kubenswrapper[5014]: I0228 05:39:44.033033 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-76c688b599-br8wc_45715325-beb1-4639-bb3c-d466fc6e85ce/barbican-worker-log/0.log" Feb 28 05:39:44 crc kubenswrapper[5014]: I0228 05:39:44.122791 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-67gpv_71fc0e19-253e-4cae-b6ee-7efc24398ffa/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 05:39:44 crc kubenswrapper[5014]: I0228 05:39:44.172451 5014 scope.go:117] "RemoveContainer" containerID="40477bc19fb801309320ac64cfcdd068229d097d3d2605fd0be9518095f50e19" Feb 28 05:39:44 crc kubenswrapper[5014]: E0228 05:39:44.172745 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:39:44 crc kubenswrapper[5014]: I0228 05:39:44.233915 5014 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_522b8e6d-5531-4436-9c64-fadde40a77df/ceilometer-central-agent/0.log" Feb 28 05:39:44 crc kubenswrapper[5014]: I0228 05:39:44.267167 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_522b8e6d-5531-4436-9c64-fadde40a77df/ceilometer-notification-agent/0.log" Feb 28 05:39:44 crc kubenswrapper[5014]: I0228 05:39:44.332751 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_522b8e6d-5531-4436-9c64-fadde40a77df/proxy-httpd/0.log" Feb 28 05:39:44 crc kubenswrapper[5014]: I0228 05:39:44.415756 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_522b8e6d-5531-4436-9c64-fadde40a77df/sg-core/0.log" Feb 28 05:39:44 crc kubenswrapper[5014]: I0228 05:39:44.450502 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_89500e11-205d-40a6-ba7b-54b76ec65b69/cinder-api/0.log" Feb 28 05:39:44 crc kubenswrapper[5014]: I0228 05:39:44.520099 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_89500e11-205d-40a6-ba7b-54b76ec65b69/cinder-api-log/0.log" Feb 28 05:39:44 crc kubenswrapper[5014]: I0228 05:39:44.666990 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_29a28811-7002-4b5e-a6d7-8c204bc306db/cinder-scheduler/0.log" Feb 28 05:39:44 crc kubenswrapper[5014]: I0228 05:39:44.731299 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_29a28811-7002-4b5e-a6d7-8c204bc306db/probe/0.log" Feb 28 05:39:44 crc kubenswrapper[5014]: I0228 05:39:44.861853 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-ldjp9_169069f4-d382-4045-99a5-cf54af88ee18/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 05:39:44 crc kubenswrapper[5014]: I0228 05:39:44.936839 5014 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-7cc87_2cf2a283-e04c-4b99-978c-8e8261227a09/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 05:39:45 crc kubenswrapper[5014]: I0228 05:39:45.070275 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-zxf77_67c40633-8133-430b-8528-2aab67995b17/init/0.log" Feb 28 05:39:45 crc kubenswrapper[5014]: I0228 05:39:45.256373 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-zxf77_67c40633-8133-430b-8528-2aab67995b17/init/0.log" Feb 28 05:39:45 crc kubenswrapper[5014]: I0228 05:39:45.309984 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-55478c4467-zxf77_67c40633-8133-430b-8528-2aab67995b17/dnsmasq-dns/0.log" Feb 28 05:39:45 crc kubenswrapper[5014]: I0228 05:39:45.321001 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-fq52b_92c43e33-7947-4ad2-984a-e2618b76f368/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 05:39:45 crc kubenswrapper[5014]: I0228 05:39:45.497730 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f2c655a1-25af-4c06-9799-01a3a9fd5e52/glance-httpd/0.log" Feb 28 05:39:45 crc kubenswrapper[5014]: I0228 05:39:45.525039 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_f2c655a1-25af-4c06-9799-01a3a9fd5e52/glance-log/0.log" Feb 28 05:39:45 crc kubenswrapper[5014]: I0228 05:39:45.701056 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_b75610f5-509e-4ffa-a5fe-0eaa0dbcce98/glance-httpd/0.log" Feb 28 05:39:45 crc kubenswrapper[5014]: I0228 05:39:45.723771 5014 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-internal-api-0_b75610f5-509e-4ffa-a5fe-0eaa0dbcce98/glance-log/0.log" Feb 28 05:39:45 crc kubenswrapper[5014]: I0228 05:39:45.894630 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-c9c88866d-6m8lj_6ee56420-1b4d-4898-97db-d05756b9bb72/horizon/0.log" Feb 28 05:39:46 crc kubenswrapper[5014]: I0228 05:39:46.011369 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-2qq4b_fd7991b4-f7f5-4c3e-b2e6-7ba07d7d15a1/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 05:39:46 crc kubenswrapper[5014]: I0228 05:39:46.241418 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-c9c88866d-6m8lj_6ee56420-1b4d-4898-97db-d05756b9bb72/horizon-log/0.log" Feb 28 05:39:46 crc kubenswrapper[5014]: I0228 05:39:46.247022 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-qvmfl_2ff06abc-551c-452e-8593-603fb882db21/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 05:39:46 crc kubenswrapper[5014]: I0228 05:39:46.444003 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-799995d5cd-97xmn_2371f935-6c31-4088-ad79-e3dadd298f40/keystone-api/0.log" Feb 28 05:39:46 crc kubenswrapper[5014]: I0228 05:39:46.494988 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29537581-pvgzn_b2a11b02-95d9-48f6-bb32-afa554e2ec2e/keystone-cron/0.log" Feb 28 05:39:46 crc kubenswrapper[5014]: I0228 05:39:46.609878 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_020d4ca7-8d28-4954-a4a0-c031eb935a21/kube-state-metrics/0.log" Feb 28 05:39:46 crc kubenswrapper[5014]: I0228 05:39:46.697280 5014 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-xz7g6_85e8a1f1-6f8c-4af8-9273-dc37192bea6a/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 05:39:47 crc kubenswrapper[5014]: I0228 05:39:47.071660 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-58dcfcf9bc-4rtlk_1de3f60c-6e45-4b05-84eb-749e470d4595/neutron-httpd/0.log" Feb 28 05:39:47 crc kubenswrapper[5014]: I0228 05:39:47.092748 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-58dcfcf9bc-4rtlk_1de3f60c-6e45-4b05-84eb-749e470d4595/neutron-api/0.log" Feb 28 05:39:47 crc kubenswrapper[5014]: I0228 05:39:47.115086 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-ztcmq_8746177b-a5ee-41d6-8d6c-94e7eae1082e/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 05:39:47 crc kubenswrapper[5014]: I0228 05:39:47.579729 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_d4e1baa8-fe04-453a-8462-e7de1e98ba73/nova-api-log/0.log" Feb 28 05:39:47 crc kubenswrapper[5014]: I0228 05:39:47.699339 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_e76b3d9a-ffbe-4d58-9264-1b4ca1528410/nova-cell0-conductor-conductor/0.log" Feb 28 05:39:47 crc kubenswrapper[5014]: I0228 05:39:47.863759 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_01377f7d-9edd-424c-b22e-42fde4e51e95/nova-cell1-conductor-conductor/0.log" Feb 28 05:39:47 crc kubenswrapper[5014]: I0228 05:39:47.880039 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_d4e1baa8-fe04-453a-8462-e7de1e98ba73/nova-api-api/0.log" Feb 28 05:39:48 crc kubenswrapper[5014]: I0228 05:39:48.028032 5014 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-novncproxy-0_974c3323-4513-41b7-9c2e-7cb58d91d6f1/nova-cell1-novncproxy-novncproxy/0.log" Feb 28 05:39:48 crc kubenswrapper[5014]: I0228 05:39:48.112584 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-62n2s_b2cec974-8eb2-428d-8c59-97af37993f91/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 05:39:48 crc kubenswrapper[5014]: I0228 05:39:48.280747 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d354f3a0-5e09-438a-bb5d-385b2ab4857f/nova-metadata-log/0.log" Feb 28 05:39:48 crc kubenswrapper[5014]: I0228 05:39:48.527973 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_7b66aa07-e591-474f-b1f0-442147425299/nova-scheduler-scheduler/0.log" Feb 28 05:39:48 crc kubenswrapper[5014]: I0228 05:39:48.542593 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ac71caa8-2f63-4b64-8d37-a1b364b62158/mysql-bootstrap/0.log" Feb 28 05:39:48 crc kubenswrapper[5014]: I0228 05:39:48.738185 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ac71caa8-2f63-4b64-8d37-a1b364b62158/galera/0.log" Feb 28 05:39:48 crc kubenswrapper[5014]: I0228 05:39:48.759472 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_ac71caa8-2f63-4b64-8d37-a1b364b62158/mysql-bootstrap/0.log" Feb 28 05:39:49 crc kubenswrapper[5014]: I0228 05:39:49.110655 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c1c70607-6183-4835-9ce6-fe3ef0d2b6fb/mysql-bootstrap/0.log" Feb 28 05:39:49 crc kubenswrapper[5014]: I0228 05:39:49.297830 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c1c70607-6183-4835-9ce6-fe3ef0d2b6fb/galera/0.log" Feb 28 05:39:49 crc kubenswrapper[5014]: I0228 05:39:49.310793 5014 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_c1c70607-6183-4835-9ce6-fe3ef0d2b6fb/mysql-bootstrap/0.log" Feb 28 05:39:49 crc kubenswrapper[5014]: I0228 05:39:49.418744 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d354f3a0-5e09-438a-bb5d-385b2ab4857f/nova-metadata-metadata/0.log" Feb 28 05:39:49 crc kubenswrapper[5014]: I0228 05:39:49.472426 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_dae41ad3-a997-4a4a-91ab-34175d98fb97/openstackclient/0.log" Feb 28 05:39:49 crc kubenswrapper[5014]: I0228 05:39:49.548984 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-9qps6_02ab5d98-13ab-483d-b32b-a509bedd8ded/ovn-controller/0.log" Feb 28 05:39:49 crc kubenswrapper[5014]: I0228 05:39:49.707679 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-mgzdl_43eb6c14-8ca4-41ba-9ee2-7326edcab237/openstack-network-exporter/0.log" Feb 28 05:39:50 crc kubenswrapper[5014]: I0228 05:39:50.075892 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6vfgk_c3f16040-f11b-405c-b332-7ee5eabac2bd/ovsdb-server-init/0.log" Feb 28 05:39:50 crc kubenswrapper[5014]: I0228 05:39:50.633143 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6vfgk_c3f16040-f11b-405c-b332-7ee5eabac2bd/ovsdb-server-init/0.log" Feb 28 05:39:50 crc kubenswrapper[5014]: I0228 05:39:50.672985 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6vfgk_c3f16040-f11b-405c-b332-7ee5eabac2bd/ovs-vswitchd/0.log" Feb 28 05:39:50 crc kubenswrapper[5014]: I0228 05:39:50.832215 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-6vfgk_c3f16040-f11b-405c-b332-7ee5eabac2bd/ovsdb-server/0.log" Feb 28 05:39:51 crc kubenswrapper[5014]: I0228 05:39:51.300821 5014 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-gdrsz_ab8babaf-acb3-4c27-a8bd-abc56808e9d7/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 05:39:51 crc kubenswrapper[5014]: I0228 05:39:51.748866 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_22702874-a9ba-4491-aed2-5ef93384150c/ovn-northd/0.log" Feb 28 05:39:52 crc kubenswrapper[5014]: I0228 05:39:52.050639 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_22702874-a9ba-4491-aed2-5ef93384150c/openstack-network-exporter/0.log" Feb 28 05:39:52 crc kubenswrapper[5014]: I0228 05:39:52.190246 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_5a44d0e3-2ba4-4d6f-924b-1f516c90a11f/openstack-network-exporter/0.log" Feb 28 05:39:52 crc kubenswrapper[5014]: I0228 05:39:52.274723 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_5a44d0e3-2ba4-4d6f-924b-1f516c90a11f/ovsdbserver-nb/0.log" Feb 28 05:39:52 crc kubenswrapper[5014]: I0228 05:39:52.382308 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_569b1ad4-179c-4852-a5fc-509fe31df812/openstack-network-exporter/0.log" Feb 28 05:39:52 crc kubenswrapper[5014]: I0228 05:39:52.389752 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_569b1ad4-179c-4852-a5fc-509fe31df812/ovsdbserver-sb/0.log" Feb 28 05:39:52 crc kubenswrapper[5014]: I0228 05:39:52.663859 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5cd4874894-s6tz4_c690f68f-407a-4db7-a99c-67cfa5a5833b/placement-api/0.log" Feb 28 05:39:52 crc kubenswrapper[5014]: I0228 05:39:52.728201 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-5cd4874894-s6tz4_c690f68f-407a-4db7-a99c-67cfa5a5833b/placement-log/0.log" Feb 28 05:39:52 crc kubenswrapper[5014]: I0228 05:39:52.936424 5014 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3df93ff6-00cf-4c7f-8971-6d1d78795456/setup-container/0.log" Feb 28 05:39:53 crc kubenswrapper[5014]: I0228 05:39:53.098883 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3df93ff6-00cf-4c7f-8971-6d1d78795456/setup-container/0.log" Feb 28 05:39:53 crc kubenswrapper[5014]: I0228 05:39:53.102536 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_3df93ff6-00cf-4c7f-8971-6d1d78795456/rabbitmq/0.log" Feb 28 05:39:53 crc kubenswrapper[5014]: I0228 05:39:53.201517 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7b0d0bd3-ff23-4098-93fb-debf7681cfce/setup-container/0.log" Feb 28 05:39:53 crc kubenswrapper[5014]: I0228 05:39:53.377408 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7b0d0bd3-ff23-4098-93fb-debf7681cfce/rabbitmq/0.log" Feb 28 05:39:53 crc kubenswrapper[5014]: I0228 05:39:53.418903 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_7b0d0bd3-ff23-4098-93fb-debf7681cfce/setup-container/0.log" Feb 28 05:39:53 crc kubenswrapper[5014]: I0228 05:39:53.508930 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-nrlg4_04a4501f-8652-4960-aa15-e083bf2c5b68/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 05:39:53 crc kubenswrapper[5014]: I0228 05:39:53.641889 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-86b55_64b99a72-222b-4ead-b368-fe335c674da5/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 05:39:53 crc kubenswrapper[5014]: I0228 05:39:53.764939 5014 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-qbknj_01598708-a115-4ecd-a957-e78d6dbedfcb/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 05:39:53 crc kubenswrapper[5014]: I0228 05:39:53.876232 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-mbxvz_3d570627-429c-4a9c-a45a-55d652968c46/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 05:39:54 crc kubenswrapper[5014]: I0228 05:39:54.054489 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-shsr9_fd843e16-57f4-412b-aeec-d22b9609502f/ssh-known-hosts-edpm-deployment/0.log" Feb 28 05:39:54 crc kubenswrapper[5014]: I0228 05:39:54.210539 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6c68684b95-vvvhf_6d31e889-55bb-4dc4-b470-dcb11b4438a7/proxy-httpd/0.log" Feb 28 05:39:54 crc kubenswrapper[5014]: I0228 05:39:54.245948 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6c68684b95-vvvhf_6d31e889-55bb-4dc4-b470-dcb11b4438a7/proxy-server/0.log" Feb 28 05:39:54 crc kubenswrapper[5014]: I0228 05:39:54.294253 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-dn9mn_15c6e56b-a312-43c9-b627-af4138518fe4/swift-ring-rebalance/0.log" Feb 28 05:39:54 crc kubenswrapper[5014]: I0228 05:39:54.457791 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2998e28e-fceb-4daa-a26c-74bffeba0d8f/account-auditor/0.log" Feb 28 05:39:54 crc kubenswrapper[5014]: I0228 05:39:54.521306 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2998e28e-fceb-4daa-a26c-74bffeba0d8f/account-reaper/0.log" Feb 28 05:39:54 crc kubenswrapper[5014]: I0228 05:39:54.542769 5014 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_2998e28e-fceb-4daa-a26c-74bffeba0d8f/account-replicator/0.log" Feb 28 05:39:54 crc kubenswrapper[5014]: I0228 05:39:54.694742 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2998e28e-fceb-4daa-a26c-74bffeba0d8f/container-auditor/0.log" Feb 28 05:39:54 crc kubenswrapper[5014]: I0228 05:39:54.743392 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2998e28e-fceb-4daa-a26c-74bffeba0d8f/account-server/0.log" Feb 28 05:39:54 crc kubenswrapper[5014]: I0228 05:39:54.803563 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2998e28e-fceb-4daa-a26c-74bffeba0d8f/container-server/0.log" Feb 28 05:39:54 crc kubenswrapper[5014]: I0228 05:39:54.840215 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2998e28e-fceb-4daa-a26c-74bffeba0d8f/container-replicator/0.log" Feb 28 05:39:54 crc kubenswrapper[5014]: I0228 05:39:54.999181 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2998e28e-fceb-4daa-a26c-74bffeba0d8f/container-updater/0.log" Feb 28 05:39:55 crc kubenswrapper[5014]: I0228 05:39:55.031587 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2998e28e-fceb-4daa-a26c-74bffeba0d8f/object-auditor/0.log" Feb 28 05:39:55 crc kubenswrapper[5014]: I0228 05:39:55.056754 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2998e28e-fceb-4daa-a26c-74bffeba0d8f/object-expirer/0.log" Feb 28 05:39:55 crc kubenswrapper[5014]: I0228 05:39:55.096765 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2998e28e-fceb-4daa-a26c-74bffeba0d8f/object-replicator/0.log" Feb 28 05:39:55 crc kubenswrapper[5014]: I0228 05:39:55.201531 5014 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_2998e28e-fceb-4daa-a26c-74bffeba0d8f/object-server/0.log" Feb 28 05:39:55 crc kubenswrapper[5014]: I0228 05:39:55.303380 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2998e28e-fceb-4daa-a26c-74bffeba0d8f/object-updater/0.log" Feb 28 05:39:55 crc kubenswrapper[5014]: I0228 05:39:55.310248 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2998e28e-fceb-4daa-a26c-74bffeba0d8f/swift-recon-cron/0.log" Feb 28 05:39:55 crc kubenswrapper[5014]: I0228 05:39:55.318478 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_2998e28e-fceb-4daa-a26c-74bffeba0d8f/rsync/0.log" Feb 28 05:39:55 crc kubenswrapper[5014]: I0228 05:39:55.564248 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_2db9b9b7-c55d-4b8b-b51b-cd081afed742/tempest-tests-tempest-tests-runner/0.log" Feb 28 05:39:55 crc kubenswrapper[5014]: I0228 05:39:55.655532 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-b4wl5_8bf54c30-88fb-46eb-8949-e2231e958201/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 05:39:55 crc kubenswrapper[5014]: I0228 05:39:55.824985 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_729e0ea7-49de-4e76-9921-8911ce80452e/test-operator-logs-container/0.log" Feb 28 05:39:55 crc kubenswrapper[5014]: I0228 05:39:55.894598 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-8r6gv_5551729e-bd25-4c6c-b3d6-24a339aeab5c/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 28 05:39:59 crc kubenswrapper[5014]: I0228 05:39:59.171641 5014 scope.go:117] "RemoveContainer" containerID="40477bc19fb801309320ac64cfcdd068229d097d3d2605fd0be9518095f50e19" Feb 28 
05:39:59 crc kubenswrapper[5014]: E0228 05:39:59.172362 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:40:00 crc kubenswrapper[5014]: I0228 05:40:00.155638 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537620-rp6br"] Feb 28 05:40:00 crc kubenswrapper[5014]: E0228 05:40:00.156322 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89df7552-1d43-46ef-875a-ca5411e102c5" containerName="container-00" Feb 28 05:40:00 crc kubenswrapper[5014]: I0228 05:40:00.156339 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="89df7552-1d43-46ef-875a-ca5411e102c5" containerName="container-00" Feb 28 05:40:00 crc kubenswrapper[5014]: I0228 05:40:00.156538 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="89df7552-1d43-46ef-875a-ca5411e102c5" containerName="container-00" Feb 28 05:40:00 crc kubenswrapper[5014]: I0228 05:40:00.157191 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537620-rp6br" Feb 28 05:40:00 crc kubenswrapper[5014]: I0228 05:40:00.159031 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 05:40:00 crc kubenswrapper[5014]: I0228 05:40:00.159266 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 05:40:00 crc kubenswrapper[5014]: I0228 05:40:00.159395 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-r6pdk" Feb 28 05:40:00 crc kubenswrapper[5014]: I0228 05:40:00.180769 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537620-rp6br"] Feb 28 05:40:00 crc kubenswrapper[5014]: I0228 05:40:00.321910 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td67s\" (UniqueName: \"kubernetes.io/projected/350ac0d4-0be2-4765-85f4-2305c7ae8971-kube-api-access-td67s\") pod \"auto-csr-approver-29537620-rp6br\" (UID: \"350ac0d4-0be2-4765-85f4-2305c7ae8971\") " pod="openshift-infra/auto-csr-approver-29537620-rp6br" Feb 28 05:40:00 crc kubenswrapper[5014]: I0228 05:40:00.434072 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td67s\" (UniqueName: \"kubernetes.io/projected/350ac0d4-0be2-4765-85f4-2305c7ae8971-kube-api-access-td67s\") pod \"auto-csr-approver-29537620-rp6br\" (UID: \"350ac0d4-0be2-4765-85f4-2305c7ae8971\") " pod="openshift-infra/auto-csr-approver-29537620-rp6br" Feb 28 05:40:00 crc kubenswrapper[5014]: I0228 05:40:00.462879 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td67s\" (UniqueName: \"kubernetes.io/projected/350ac0d4-0be2-4765-85f4-2305c7ae8971-kube-api-access-td67s\") pod \"auto-csr-approver-29537620-rp6br\" (UID: \"350ac0d4-0be2-4765-85f4-2305c7ae8971\") " 
pod="openshift-infra/auto-csr-approver-29537620-rp6br" Feb 28 05:40:00 crc kubenswrapper[5014]: I0228 05:40:00.479240 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537620-rp6br" Feb 28 05:40:00 crc kubenswrapper[5014]: I0228 05:40:00.928452 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537620-rp6br"] Feb 28 05:40:00 crc kubenswrapper[5014]: I0228 05:40:00.929398 5014 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 05:40:00 crc kubenswrapper[5014]: I0228 05:40:00.972918 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537620-rp6br" event={"ID":"350ac0d4-0be2-4765-85f4-2305c7ae8971","Type":"ContainerStarted","Data":"8f804722e3af850e5a7f8ad61e1dabc2445302b09c27d6d9e6f2b5c4849f4dbe"} Feb 28 05:40:01 crc kubenswrapper[5014]: I0228 05:40:01.265732 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_1420f298-151a-48af-bdb2-a58d5143967c/memcached/0.log" Feb 28 05:40:02 crc kubenswrapper[5014]: I0228 05:40:02.997318 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537620-rp6br" event={"ID":"350ac0d4-0be2-4765-85f4-2305c7ae8971","Type":"ContainerStarted","Data":"4bc253b5d178348d9346b2163eb96300a8270858ebb5c4f1035f66810f361084"} Feb 28 05:40:03 crc kubenswrapper[5014]: I0228 05:40:03.014996 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537620-rp6br" podStartSLOduration=1.62257309 podStartE2EDuration="3.014980749s" podCreationTimestamp="2026-02-28 05:40:00 +0000 UTC" firstStartedPulling="2026-02-28 05:40:00.929196527 +0000 UTC m=+3989.599322437" lastFinishedPulling="2026-02-28 05:40:02.321604186 +0000 UTC m=+3990.991730096" observedRunningTime="2026-02-28 05:40:03.011946287 +0000 UTC m=+3991.682072197" 
watchObservedRunningTime="2026-02-28 05:40:03.014980749 +0000 UTC m=+3991.685106659" Feb 28 05:40:04 crc kubenswrapper[5014]: I0228 05:40:04.005331 5014 generic.go:334] "Generic (PLEG): container finished" podID="350ac0d4-0be2-4765-85f4-2305c7ae8971" containerID="4bc253b5d178348d9346b2163eb96300a8270858ebb5c4f1035f66810f361084" exitCode=0 Feb 28 05:40:04 crc kubenswrapper[5014]: I0228 05:40:04.005372 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537620-rp6br" event={"ID":"350ac0d4-0be2-4765-85f4-2305c7ae8971","Type":"ContainerDied","Data":"4bc253b5d178348d9346b2163eb96300a8270858ebb5c4f1035f66810f361084"} Feb 28 05:40:05 crc kubenswrapper[5014]: I0228 05:40:05.498592 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537620-rp6br" Feb 28 05:40:05 crc kubenswrapper[5014]: I0228 05:40:05.526501 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td67s\" (UniqueName: \"kubernetes.io/projected/350ac0d4-0be2-4765-85f4-2305c7ae8971-kube-api-access-td67s\") pod \"350ac0d4-0be2-4765-85f4-2305c7ae8971\" (UID: \"350ac0d4-0be2-4765-85f4-2305c7ae8971\") " Feb 28 05:40:05 crc kubenswrapper[5014]: I0228 05:40:05.531957 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/350ac0d4-0be2-4765-85f4-2305c7ae8971-kube-api-access-td67s" (OuterVolumeSpecName: "kube-api-access-td67s") pod "350ac0d4-0be2-4765-85f4-2305c7ae8971" (UID: "350ac0d4-0be2-4765-85f4-2305c7ae8971"). InnerVolumeSpecName "kube-api-access-td67s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:40:05 crc kubenswrapper[5014]: I0228 05:40:05.628967 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td67s\" (UniqueName: \"kubernetes.io/projected/350ac0d4-0be2-4765-85f4-2305c7ae8971-kube-api-access-td67s\") on node \"crc\" DevicePath \"\"" Feb 28 05:40:06 crc kubenswrapper[5014]: I0228 05:40:06.022674 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537620-rp6br" event={"ID":"350ac0d4-0be2-4765-85f4-2305c7ae8971","Type":"ContainerDied","Data":"8f804722e3af850e5a7f8ad61e1dabc2445302b09c27d6d9e6f2b5c4849f4dbe"} Feb 28 05:40:06 crc kubenswrapper[5014]: I0228 05:40:06.022739 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f804722e3af850e5a7f8ad61e1dabc2445302b09c27d6d9e6f2b5c4849f4dbe" Feb 28 05:40:06 crc kubenswrapper[5014]: I0228 05:40:06.022766 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537620-rp6br" Feb 28 05:40:06 crc kubenswrapper[5014]: I0228 05:40:06.556115 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537614-mj8kv"] Feb 28 05:40:06 crc kubenswrapper[5014]: I0228 05:40:06.562996 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537614-mj8kv"] Feb 28 05:40:08 crc kubenswrapper[5014]: I0228 05:40:08.181028 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ed45fbe-6686-42e2-9d85-7da2fb54784c" path="/var/lib/kubelet/pods/7ed45fbe-6686-42e2-9d85-7da2fb54784c/volumes" Feb 28 05:40:14 crc kubenswrapper[5014]: I0228 05:40:14.172076 5014 scope.go:117] "RemoveContainer" containerID="40477bc19fb801309320ac64cfcdd068229d097d3d2605fd0be9518095f50e19" Feb 28 05:40:14 crc kubenswrapper[5014]: E0228 05:40:14.173045 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:40:24 crc kubenswrapper[5014]: I0228 05:40:24.869054 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-5d87c9d997-587tn_52707aa4-b40d-4046-a721-e3b31a1f9648/manager/0.log" Feb 28 05:40:25 crc kubenswrapper[5014]: I0228 05:40:25.085973 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm_bf32d2bd-8642-45d7-ae34-876531251b37/util/0.log" Feb 28 05:40:25 crc kubenswrapper[5014]: I0228 05:40:25.346141 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm_bf32d2bd-8642-45d7-ae34-876531251b37/util/0.log" Feb 28 05:40:25 crc kubenswrapper[5014]: I0228 05:40:25.379428 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm_bf32d2bd-8642-45d7-ae34-876531251b37/pull/0.log" Feb 28 05:40:25 crc kubenswrapper[5014]: I0228 05:40:25.573058 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm_bf32d2bd-8642-45d7-ae34-876531251b37/pull/0.log" Feb 28 05:40:25 crc kubenswrapper[5014]: I0228 05:40:25.704033 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm_bf32d2bd-8642-45d7-ae34-876531251b37/util/0.log" Feb 28 05:40:25 crc kubenswrapper[5014]: I0228 05:40:25.779459 5014 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm_bf32d2bd-8642-45d7-ae34-876531251b37/pull/0.log" Feb 28 05:40:25 crc kubenswrapper[5014]: I0228 05:40:25.911878 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_e0c67eace496679f2ffd5045f577eb939c19542046b79e00b171d4b4ed76zpm_bf32d2bd-8642-45d7-ae34-876531251b37/extract/0.log" Feb 28 05:40:26 crc kubenswrapper[5014]: I0228 05:40:26.221331 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-64db6967f8-5t42k_f734a97b-b94d-4132-a426-15111b3fc207/manager/0.log" Feb 28 05:40:26 crc kubenswrapper[5014]: I0228 05:40:26.385608 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-cf99c678f-2srvx_9fe3aab0-3f3b-4fb3-a5da-2206ba55e813/manager/0.log" Feb 28 05:40:26 crc kubenswrapper[5014]: I0228 05:40:26.525453 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55d77d7b5c-n9r5r_385767a3-7908-4f17-9f63-ea25c784c715/manager/0.log" Feb 28 05:40:26 crc kubenswrapper[5014]: I0228 05:40:26.644006 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-78bc7f9bd9-ppf6c_5b9d913b-e0e8-42f5-8d98-60fd3c219ff8/manager/0.log" Feb 28 05:40:26 crc kubenswrapper[5014]: I0228 05:40:26.821051 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-545456dc4-7bmg5_dd26043c-48bc-4202-8266-d2590b6530e3/manager/0.log" Feb 28 05:40:27 crc kubenswrapper[5014]: I0228 05:40:27.125964 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-786bd545f6-8hp88_0535be64-bda6-4b55-9eb1-fe5a86d3cae8/manager/0.log" Feb 28 05:40:27 crc kubenswrapper[5014]: I0228 
05:40:27.148227 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7c789f89c6-cfb47_42fc68c6-e92f-4449-9398-518f904c58fb/manager/0.log" Feb 28 05:40:27 crc kubenswrapper[5014]: I0228 05:40:27.353833 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-67d996989d-gm8rn_5189b3c2-1b93-432b-b1a3-dc579ef2abb6/manager/0.log" Feb 28 05:40:27 crc kubenswrapper[5014]: I0228 05:40:27.408768 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-7b6bfb6475-s4j6f_5f8b5a91-a57a-4679-a625-007592105038/manager/0.log" Feb 28 05:40:27 crc kubenswrapper[5014]: I0228 05:40:27.694096 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-54688575f-xpd29_f5555801-1739-45d3-946f-3b731b87c593/manager/0.log" Feb 28 05:40:27 crc kubenswrapper[5014]: I0228 05:40:27.886551 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5d86c7ddb7-pl8nn_895709de-d62e-4101-8294-d73238790d9c/manager/0.log" Feb 28 05:40:27 crc kubenswrapper[5014]: I0228 05:40:27.889739 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-74b6b5dc96-rcb7d_d56f3210-6165-4bd1-b2e0-d8eb94b370a9/manager/0.log" Feb 28 05:40:28 crc kubenswrapper[5014]: I0228 05:40:28.188429 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9cpqdwj_7c84fa60-3777-4544-84ce-abc199e9df18/manager/0.log" Feb 28 05:40:28 crc kubenswrapper[5014]: I0228 05:40:28.339558 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-dddf4b8c5-khjpf_d6538cec-6b14-4d19-92b6-e1ada175e8a8/operator/0.log" Feb 28 05:40:28 crc 
kubenswrapper[5014]: I0228 05:40:28.445019 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-2vp6x_55997ed6-05a0-420d-bdaf-5d27ea9e0cf2/registry-server/0.log" Feb 28 05:40:28 crc kubenswrapper[5014]: I0228 05:40:28.770383 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-75684d597f-pg7jw_67e3c7dc-a78f-4039-b326-93795dd322ca/manager/0.log" Feb 28 05:40:28 crc kubenswrapper[5014]: I0228 05:40:28.936258 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-648564c9fc-gjdqb_07f212b7-6aea-4a43-95fa-4637b6dc1d87/manager/0.log" Feb 28 05:40:29 crc kubenswrapper[5014]: I0228 05:40:29.070663 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-c4ptt_90ad3ca4-2470-4ab2-9e22-17db53a7237d/operator/0.log" Feb 28 05:40:29 crc kubenswrapper[5014]: I0228 05:40:29.171667 5014 scope.go:117] "RemoveContainer" containerID="40477bc19fb801309320ac64cfcdd068229d097d3d2605fd0be9518095f50e19" Feb 28 05:40:29 crc kubenswrapper[5014]: E0228 05:40:29.172070 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:40:29 crc kubenswrapper[5014]: I0228 05:40:29.219180 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9b9ff9f4d-snccq_f254469c-2cb3-4f38-8c52-960aa17d27fe/manager/0.log" Feb 28 05:40:29 crc kubenswrapper[5014]: I0228 05:40:29.513346 5014 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5fdb694969-82d7x_1089c9f7-0d91-4639-9890-c41acc881797/manager/0.log" Feb 28 05:40:29 crc kubenswrapper[5014]: I0228 05:40:29.859225 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-55b5ff4dbb-clg6t_d9ccc996-b3d9-44f1-8a6e-c58517885a7c/manager/0.log" Feb 28 05:40:29 crc kubenswrapper[5014]: I0228 05:40:29.978425 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-bccc79885-975zn_f229b3d6-46dd-42ab-bb96-c207b02b35d0/manager/0.log" Feb 28 05:40:30 crc kubenswrapper[5014]: I0228 05:40:30.049852 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-76974fc5d7-9d7k5_b65e9823-17a7-42da-9191-af1db70355b9/manager/0.log" Feb 28 05:40:33 crc kubenswrapper[5014]: I0228 05:40:33.722458 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6db6876945-p2g4k_7dfedb71-1284-4e5c-826d-efb134b34cdb/manager/0.log" Feb 28 05:40:44 crc kubenswrapper[5014]: I0228 05:40:44.173119 5014 scope.go:117] "RemoveContainer" containerID="40477bc19fb801309320ac64cfcdd068229d097d3d2605fd0be9518095f50e19" Feb 28 05:40:44 crc kubenswrapper[5014]: E0228 05:40:44.174314 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:40:53 crc kubenswrapper[5014]: I0228 05:40:53.538198 5014 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-z87qr_8bf7d4c6-1fd5-4fa4-a7a3-bf5af08d7eba/control-plane-machine-set-operator/0.log" Feb 28 05:40:53 crc kubenswrapper[5014]: I0228 05:40:53.684320 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-bpskb_c33c5a22-b0a7-4e91-9d5b-28b9908fbfd6/machine-api-operator/0.log" Feb 28 05:40:53 crc kubenswrapper[5014]: I0228 05:40:53.719169 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-bpskb_c33c5a22-b0a7-4e91-9d5b-28b9908fbfd6/kube-rbac-proxy/0.log" Feb 28 05:40:57 crc kubenswrapper[5014]: I0228 05:40:57.171338 5014 scope.go:117] "RemoveContainer" containerID="40477bc19fb801309320ac64cfcdd068229d097d3d2605fd0be9518095f50e19" Feb 28 05:40:57 crc kubenswrapper[5014]: E0228 05:40:57.171786 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:41:00 crc kubenswrapper[5014]: I0228 05:41:00.037243 5014 scope.go:117] "RemoveContainer" containerID="ead0c225a7dc35995ddbed05655df713bac795f55f959e804eacea7f3d3ff92c" Feb 28 05:41:09 crc kubenswrapper[5014]: I0228 05:41:09.391610 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-lnv49_efbeff5a-c04c-47c0-8c97-338798ffc76b/cert-manager-controller/0.log" Feb 28 05:41:09 crc kubenswrapper[5014]: I0228 05:41:09.643862 5014 log.go:25] "Finished parsing log file" 
path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-pwx6w_74306563-899f-44f1-b51a-e9aed7bd437c/cert-manager-cainjector/0.log" Feb 28 05:41:09 crc kubenswrapper[5014]: I0228 05:41:09.656245 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-gwzqp_f921b55b-c9e9-4183-a430-192642dc2b06/cert-manager-webhook/0.log" Feb 28 05:41:10 crc kubenswrapper[5014]: I0228 05:41:10.171722 5014 scope.go:117] "RemoveContainer" containerID="40477bc19fb801309320ac64cfcdd068229d097d3d2605fd0be9518095f50e19" Feb 28 05:41:10 crc kubenswrapper[5014]: E0228 05:41:10.171977 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:41:20 crc kubenswrapper[5014]: I0228 05:41:20.266847 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rltfc"] Feb 28 05:41:20 crc kubenswrapper[5014]: E0228 05:41:20.267991 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="350ac0d4-0be2-4765-85f4-2305c7ae8971" containerName="oc" Feb 28 05:41:20 crc kubenswrapper[5014]: I0228 05:41:20.268013 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="350ac0d4-0be2-4765-85f4-2305c7ae8971" containerName="oc" Feb 28 05:41:20 crc kubenswrapper[5014]: I0228 05:41:20.268402 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="350ac0d4-0be2-4765-85f4-2305c7ae8971" containerName="oc" Feb 28 05:41:20 crc kubenswrapper[5014]: I0228 05:41:20.314108 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rltfc"] Feb 28 05:41:20 crc 
kubenswrapper[5014]: I0228 05:41:20.314332 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rltfc" Feb 28 05:41:20 crc kubenswrapper[5014]: I0228 05:41:20.439957 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rqp2\" (UniqueName: \"kubernetes.io/projected/51ecd474-a8ea-4849-8a90-ef530ce65e82-kube-api-access-4rqp2\") pod \"community-operators-rltfc\" (UID: \"51ecd474-a8ea-4849-8a90-ef530ce65e82\") " pod="openshift-marketplace/community-operators-rltfc" Feb 28 05:41:20 crc kubenswrapper[5014]: I0228 05:41:20.440114 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51ecd474-a8ea-4849-8a90-ef530ce65e82-catalog-content\") pod \"community-operators-rltfc\" (UID: \"51ecd474-a8ea-4849-8a90-ef530ce65e82\") " pod="openshift-marketplace/community-operators-rltfc" Feb 28 05:41:20 crc kubenswrapper[5014]: I0228 05:41:20.440299 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51ecd474-a8ea-4849-8a90-ef530ce65e82-utilities\") pod \"community-operators-rltfc\" (UID: \"51ecd474-a8ea-4849-8a90-ef530ce65e82\") " pod="openshift-marketplace/community-operators-rltfc" Feb 28 05:41:20 crc kubenswrapper[5014]: I0228 05:41:20.542533 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51ecd474-a8ea-4849-8a90-ef530ce65e82-catalog-content\") pod \"community-operators-rltfc\" (UID: \"51ecd474-a8ea-4849-8a90-ef530ce65e82\") " pod="openshift-marketplace/community-operators-rltfc" Feb 28 05:41:20 crc kubenswrapper[5014]: I0228 05:41:20.542624 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/51ecd474-a8ea-4849-8a90-ef530ce65e82-utilities\") pod \"community-operators-rltfc\" (UID: \"51ecd474-a8ea-4849-8a90-ef530ce65e82\") " pod="openshift-marketplace/community-operators-rltfc" Feb 28 05:41:20 crc kubenswrapper[5014]: I0228 05:41:20.542709 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rqp2\" (UniqueName: \"kubernetes.io/projected/51ecd474-a8ea-4849-8a90-ef530ce65e82-kube-api-access-4rqp2\") pod \"community-operators-rltfc\" (UID: \"51ecd474-a8ea-4849-8a90-ef530ce65e82\") " pod="openshift-marketplace/community-operators-rltfc" Feb 28 05:41:20 crc kubenswrapper[5014]: I0228 05:41:20.543570 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51ecd474-a8ea-4849-8a90-ef530ce65e82-utilities\") pod \"community-operators-rltfc\" (UID: \"51ecd474-a8ea-4849-8a90-ef530ce65e82\") " pod="openshift-marketplace/community-operators-rltfc" Feb 28 05:41:20 crc kubenswrapper[5014]: I0228 05:41:20.543612 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51ecd474-a8ea-4849-8a90-ef530ce65e82-catalog-content\") pod \"community-operators-rltfc\" (UID: \"51ecd474-a8ea-4849-8a90-ef530ce65e82\") " pod="openshift-marketplace/community-operators-rltfc" Feb 28 05:41:20 crc kubenswrapper[5014]: I0228 05:41:20.562993 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rqp2\" (UniqueName: \"kubernetes.io/projected/51ecd474-a8ea-4849-8a90-ef530ce65e82-kube-api-access-4rqp2\") pod \"community-operators-rltfc\" (UID: \"51ecd474-a8ea-4849-8a90-ef530ce65e82\") " pod="openshift-marketplace/community-operators-rltfc" Feb 28 05:41:20 crc kubenswrapper[5014]: I0228 05:41:20.665534 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-rltfc" Feb 28 05:41:21 crc kubenswrapper[5014]: I0228 05:41:21.252153 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rltfc"] Feb 28 05:41:21 crc kubenswrapper[5014]: I0228 05:41:21.776130 5014 generic.go:334] "Generic (PLEG): container finished" podID="51ecd474-a8ea-4849-8a90-ef530ce65e82" containerID="57d446d64b67c0818cad06c3971d182b345c0e57325fbb048babf0c677237a71" exitCode=0 Feb 28 05:41:21 crc kubenswrapper[5014]: I0228 05:41:21.776306 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rltfc" event={"ID":"51ecd474-a8ea-4849-8a90-ef530ce65e82","Type":"ContainerDied","Data":"57d446d64b67c0818cad06c3971d182b345c0e57325fbb048babf0c677237a71"} Feb 28 05:41:21 crc kubenswrapper[5014]: I0228 05:41:21.776616 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rltfc" event={"ID":"51ecd474-a8ea-4849-8a90-ef530ce65e82","Type":"ContainerStarted","Data":"585d876219147c0178ecb57f28ddf45bded17f3f28c279e0a437a97d72ea8ef1"} Feb 28 05:41:22 crc kubenswrapper[5014]: I0228 05:41:22.798913 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rltfc" event={"ID":"51ecd474-a8ea-4849-8a90-ef530ce65e82","Type":"ContainerStarted","Data":"4cb1fc9141f4c2696af1df75af37098a8406efb1c2ba746210f48e594f41dbb4"} Feb 28 05:41:23 crc kubenswrapper[5014]: I0228 05:41:23.172615 5014 scope.go:117] "RemoveContainer" containerID="40477bc19fb801309320ac64cfcdd068229d097d3d2605fd0be9518095f50e19" Feb 28 05:41:23 crc kubenswrapper[5014]: E0228 05:41:23.173089 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:41:23 crc kubenswrapper[5014]: I0228 05:41:23.808001 5014 generic.go:334] "Generic (PLEG): container finished" podID="51ecd474-a8ea-4849-8a90-ef530ce65e82" containerID="4cb1fc9141f4c2696af1df75af37098a8406efb1c2ba746210f48e594f41dbb4" exitCode=0 Feb 28 05:41:23 crc kubenswrapper[5014]: I0228 05:41:23.808039 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rltfc" event={"ID":"51ecd474-a8ea-4849-8a90-ef530ce65e82","Type":"ContainerDied","Data":"4cb1fc9141f4c2696af1df75af37098a8406efb1c2ba746210f48e594f41dbb4"} Feb 28 05:41:24 crc kubenswrapper[5014]: I0228 05:41:24.826111 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rltfc" event={"ID":"51ecd474-a8ea-4849-8a90-ef530ce65e82","Type":"ContainerStarted","Data":"7e8170163145d3a1ccd07e4e8cede012029a6e0f490c2bc0dcfcad32206706c0"} Feb 28 05:41:24 crc kubenswrapper[5014]: I0228 05:41:24.848748 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rltfc" podStartSLOduration=2.386820491 podStartE2EDuration="4.848728387s" podCreationTimestamp="2026-02-28 05:41:20 +0000 UTC" firstStartedPulling="2026-02-28 05:41:21.779714852 +0000 UTC m=+4070.449840772" lastFinishedPulling="2026-02-28 05:41:24.241622758 +0000 UTC m=+4072.911748668" observedRunningTime="2026-02-28 05:41:24.841458741 +0000 UTC m=+4073.511584641" watchObservedRunningTime="2026-02-28 05:41:24.848728387 +0000 UTC m=+4073.518854297" Feb 28 05:41:25 crc kubenswrapper[5014]: I0228 05:41:25.013969 5014 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5dcbbd79cf-9dtw6_77ea3bfd-fad5-4789-8930-d7b7148453b2/nmstate-console-plugin/0.log" Feb 28 05:41:25 crc kubenswrapper[5014]: I0228 05:41:25.243382 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-qn5jv_057e43b5-a9ff-43d5-9f75-e9add271d1a6/nmstate-handler/0.log" Feb 28 05:41:25 crc kubenswrapper[5014]: I0228 05:41:25.306633 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-qktq9_72580a24-d267-4917-955f-639fb9600a27/kube-rbac-proxy/0.log" Feb 28 05:41:25 crc kubenswrapper[5014]: I0228 05:41:25.310969 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-qktq9_72580a24-d267-4917-955f-639fb9600a27/nmstate-metrics/0.log" Feb 28 05:41:25 crc kubenswrapper[5014]: I0228 05:41:25.555135 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-75c5dccd6c-hdp54_1a5c4be4-d285-425e-bd4b-26cbf4d48b0e/nmstate-operator/0.log" Feb 28 05:41:25 crc kubenswrapper[5014]: I0228 05:41:25.604992 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-786f45cff4-lpxlh_7116c3b6-8ec4-42af-9739-9c4b1ea6e7c6/nmstate-webhook/0.log" Feb 28 05:41:30 crc kubenswrapper[5014]: I0228 05:41:30.666220 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rltfc" Feb 28 05:41:30 crc kubenswrapper[5014]: I0228 05:41:30.666707 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rltfc" Feb 28 05:41:30 crc kubenswrapper[5014]: I0228 05:41:30.762488 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rltfc" Feb 28 05:41:30 crc kubenswrapper[5014]: I0228 05:41:30.928267 5014 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rltfc" Feb 28 05:41:31 crc kubenswrapper[5014]: I0228 05:41:31.012435 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rltfc"] Feb 28 05:41:32 crc kubenswrapper[5014]: I0228 05:41:32.908203 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rltfc" podUID="51ecd474-a8ea-4849-8a90-ef530ce65e82" containerName="registry-server" containerID="cri-o://7e8170163145d3a1ccd07e4e8cede012029a6e0f490c2bc0dcfcad32206706c0" gracePeriod=2 Feb 28 05:41:33 crc kubenswrapper[5014]: I0228 05:41:33.458192 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rltfc" Feb 28 05:41:33 crc kubenswrapper[5014]: I0228 05:41:33.476283 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51ecd474-a8ea-4849-8a90-ef530ce65e82-utilities\") pod \"51ecd474-a8ea-4849-8a90-ef530ce65e82\" (UID: \"51ecd474-a8ea-4849-8a90-ef530ce65e82\") " Feb 28 05:41:33 crc kubenswrapper[5014]: I0228 05:41:33.476441 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rqp2\" (UniqueName: \"kubernetes.io/projected/51ecd474-a8ea-4849-8a90-ef530ce65e82-kube-api-access-4rqp2\") pod \"51ecd474-a8ea-4849-8a90-ef530ce65e82\" (UID: \"51ecd474-a8ea-4849-8a90-ef530ce65e82\") " Feb 28 05:41:33 crc kubenswrapper[5014]: I0228 05:41:33.476513 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51ecd474-a8ea-4849-8a90-ef530ce65e82-catalog-content\") pod \"51ecd474-a8ea-4849-8a90-ef530ce65e82\" (UID: \"51ecd474-a8ea-4849-8a90-ef530ce65e82\") " Feb 28 05:41:33 crc kubenswrapper[5014]: I0228 05:41:33.477069 5014 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51ecd474-a8ea-4849-8a90-ef530ce65e82-utilities" (OuterVolumeSpecName: "utilities") pod "51ecd474-a8ea-4849-8a90-ef530ce65e82" (UID: "51ecd474-a8ea-4849-8a90-ef530ce65e82"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:41:33 crc kubenswrapper[5014]: I0228 05:41:33.483952 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51ecd474-a8ea-4849-8a90-ef530ce65e82-kube-api-access-4rqp2" (OuterVolumeSpecName: "kube-api-access-4rqp2") pod "51ecd474-a8ea-4849-8a90-ef530ce65e82" (UID: "51ecd474-a8ea-4849-8a90-ef530ce65e82"). InnerVolumeSpecName "kube-api-access-4rqp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:41:33 crc kubenswrapper[5014]: I0228 05:41:33.525847 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51ecd474-a8ea-4849-8a90-ef530ce65e82-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51ecd474-a8ea-4849-8a90-ef530ce65e82" (UID: "51ecd474-a8ea-4849-8a90-ef530ce65e82"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:41:33 crc kubenswrapper[5014]: I0228 05:41:33.579083 5014 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51ecd474-a8ea-4849-8a90-ef530ce65e82-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 05:41:33 crc kubenswrapper[5014]: I0228 05:41:33.579122 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rqp2\" (UniqueName: \"kubernetes.io/projected/51ecd474-a8ea-4849-8a90-ef530ce65e82-kube-api-access-4rqp2\") on node \"crc\" DevicePath \"\"" Feb 28 05:41:33 crc kubenswrapper[5014]: I0228 05:41:33.579136 5014 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51ecd474-a8ea-4849-8a90-ef530ce65e82-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 05:41:33 crc kubenswrapper[5014]: I0228 05:41:33.921322 5014 generic.go:334] "Generic (PLEG): container finished" podID="51ecd474-a8ea-4849-8a90-ef530ce65e82" containerID="7e8170163145d3a1ccd07e4e8cede012029a6e0f490c2bc0dcfcad32206706c0" exitCode=0 Feb 28 05:41:33 crc kubenswrapper[5014]: I0228 05:41:33.921361 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rltfc" event={"ID":"51ecd474-a8ea-4849-8a90-ef530ce65e82","Type":"ContainerDied","Data":"7e8170163145d3a1ccd07e4e8cede012029a6e0f490c2bc0dcfcad32206706c0"} Feb 28 05:41:33 crc kubenswrapper[5014]: I0228 05:41:33.921390 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rltfc" event={"ID":"51ecd474-a8ea-4849-8a90-ef530ce65e82","Type":"ContainerDied","Data":"585d876219147c0178ecb57f28ddf45bded17f3f28c279e0a437a97d72ea8ef1"} Feb 28 05:41:33 crc kubenswrapper[5014]: I0228 05:41:33.921405 5014 scope.go:117] "RemoveContainer" containerID="7e8170163145d3a1ccd07e4e8cede012029a6e0f490c2bc0dcfcad32206706c0" Feb 28 05:41:33 crc kubenswrapper[5014]: I0228 
05:41:33.923511 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rltfc" Feb 28 05:41:33 crc kubenswrapper[5014]: I0228 05:41:33.945528 5014 scope.go:117] "RemoveContainer" containerID="4cb1fc9141f4c2696af1df75af37098a8406efb1c2ba746210f48e594f41dbb4" Feb 28 05:41:33 crc kubenswrapper[5014]: I0228 05:41:33.974663 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rltfc"] Feb 28 05:41:33 crc kubenswrapper[5014]: I0228 05:41:33.983707 5014 scope.go:117] "RemoveContainer" containerID="57d446d64b67c0818cad06c3971d182b345c0e57325fbb048babf0c677237a71" Feb 28 05:41:33 crc kubenswrapper[5014]: I0228 05:41:33.996857 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rltfc"] Feb 28 05:41:34 crc kubenswrapper[5014]: I0228 05:41:34.045007 5014 scope.go:117] "RemoveContainer" containerID="7e8170163145d3a1ccd07e4e8cede012029a6e0f490c2bc0dcfcad32206706c0" Feb 28 05:41:34 crc kubenswrapper[5014]: E0228 05:41:34.046001 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e8170163145d3a1ccd07e4e8cede012029a6e0f490c2bc0dcfcad32206706c0\": container with ID starting with 7e8170163145d3a1ccd07e4e8cede012029a6e0f490c2bc0dcfcad32206706c0 not found: ID does not exist" containerID="7e8170163145d3a1ccd07e4e8cede012029a6e0f490c2bc0dcfcad32206706c0" Feb 28 05:41:34 crc kubenswrapper[5014]: I0228 05:41:34.046065 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e8170163145d3a1ccd07e4e8cede012029a6e0f490c2bc0dcfcad32206706c0"} err="failed to get container status \"7e8170163145d3a1ccd07e4e8cede012029a6e0f490c2bc0dcfcad32206706c0\": rpc error: code = NotFound desc = could not find container \"7e8170163145d3a1ccd07e4e8cede012029a6e0f490c2bc0dcfcad32206706c0\": container with ID starting with 
7e8170163145d3a1ccd07e4e8cede012029a6e0f490c2bc0dcfcad32206706c0 not found: ID does not exist" Feb 28 05:41:34 crc kubenswrapper[5014]: I0228 05:41:34.046106 5014 scope.go:117] "RemoveContainer" containerID="4cb1fc9141f4c2696af1df75af37098a8406efb1c2ba746210f48e594f41dbb4" Feb 28 05:41:34 crc kubenswrapper[5014]: E0228 05:41:34.046783 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cb1fc9141f4c2696af1df75af37098a8406efb1c2ba746210f48e594f41dbb4\": container with ID starting with 4cb1fc9141f4c2696af1df75af37098a8406efb1c2ba746210f48e594f41dbb4 not found: ID does not exist" containerID="4cb1fc9141f4c2696af1df75af37098a8406efb1c2ba746210f48e594f41dbb4" Feb 28 05:41:34 crc kubenswrapper[5014]: I0228 05:41:34.046940 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cb1fc9141f4c2696af1df75af37098a8406efb1c2ba746210f48e594f41dbb4"} err="failed to get container status \"4cb1fc9141f4c2696af1df75af37098a8406efb1c2ba746210f48e594f41dbb4\": rpc error: code = NotFound desc = could not find container \"4cb1fc9141f4c2696af1df75af37098a8406efb1c2ba746210f48e594f41dbb4\": container with ID starting with 4cb1fc9141f4c2696af1df75af37098a8406efb1c2ba746210f48e594f41dbb4 not found: ID does not exist" Feb 28 05:41:34 crc kubenswrapper[5014]: I0228 05:41:34.047021 5014 scope.go:117] "RemoveContainer" containerID="57d446d64b67c0818cad06c3971d182b345c0e57325fbb048babf0c677237a71" Feb 28 05:41:34 crc kubenswrapper[5014]: E0228 05:41:34.047438 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57d446d64b67c0818cad06c3971d182b345c0e57325fbb048babf0c677237a71\": container with ID starting with 57d446d64b67c0818cad06c3971d182b345c0e57325fbb048babf0c677237a71 not found: ID does not exist" containerID="57d446d64b67c0818cad06c3971d182b345c0e57325fbb048babf0c677237a71" Feb 28 05:41:34 crc 
kubenswrapper[5014]: I0228 05:41:34.047498 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57d446d64b67c0818cad06c3971d182b345c0e57325fbb048babf0c677237a71"} err="failed to get container status \"57d446d64b67c0818cad06c3971d182b345c0e57325fbb048babf0c677237a71\": rpc error: code = NotFound desc = could not find container \"57d446d64b67c0818cad06c3971d182b345c0e57325fbb048babf0c677237a71\": container with ID starting with 57d446d64b67c0818cad06c3971d182b345c0e57325fbb048babf0c677237a71 not found: ID does not exist" Feb 28 05:41:34 crc kubenswrapper[5014]: I0228 05:41:34.183166 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51ecd474-a8ea-4849-8a90-ef530ce65e82" path="/var/lib/kubelet/pods/51ecd474-a8ea-4849-8a90-ef530ce65e82/volumes" Feb 28 05:41:37 crc kubenswrapper[5014]: I0228 05:41:37.171964 5014 scope.go:117] "RemoveContainer" containerID="40477bc19fb801309320ac64cfcdd068229d097d3d2605fd0be9518095f50e19" Feb 28 05:41:37 crc kubenswrapper[5014]: E0228 05:41:37.173145 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:41:52 crc kubenswrapper[5014]: I0228 05:41:52.180675 5014 scope.go:117] "RemoveContainer" containerID="40477bc19fb801309320ac64cfcdd068229d097d3d2605fd0be9518095f50e19" Feb 28 05:41:52 crc kubenswrapper[5014]: E0228 05:41:52.181654 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:41:59 crc kubenswrapper[5014]: I0228 05:41:59.561906 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-tl2qx_52613a39-487f-4a3e-b2fb-97e969552377/kube-rbac-proxy/0.log" Feb 28 05:41:59 crc kubenswrapper[5014]: I0228 05:41:59.774372 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-tl2qx_52613a39-487f-4a3e-b2fb-97e969552377/controller/0.log" Feb 28 05:41:59 crc kubenswrapper[5014]: I0228 05:41:59.834463 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rkp2w_cd8eb09e-7a57-4b01-b09c-519bbca4c5ed/cp-frr-files/0.log" Feb 28 05:41:59 crc kubenswrapper[5014]: I0228 05:41:59.962665 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rkp2w_cd8eb09e-7a57-4b01-b09c-519bbca4c5ed/cp-metrics/0.log" Feb 28 05:41:59 crc kubenswrapper[5014]: I0228 05:41:59.983227 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rkp2w_cd8eb09e-7a57-4b01-b09c-519bbca4c5ed/cp-reloader/0.log" Feb 28 05:41:59 crc kubenswrapper[5014]: I0228 05:41:59.988198 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rkp2w_cd8eb09e-7a57-4b01-b09c-519bbca4c5ed/cp-frr-files/0.log" Feb 28 05:42:00 crc kubenswrapper[5014]: I0228 05:42:00.040762 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rkp2w_cd8eb09e-7a57-4b01-b09c-519bbca4c5ed/cp-reloader/0.log" Feb 28 05:42:00 crc kubenswrapper[5014]: I0228 05:42:00.139375 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537622-c6sv6"] Feb 28 05:42:00 crc kubenswrapper[5014]: E0228 05:42:00.140028 5014 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="51ecd474-a8ea-4849-8a90-ef530ce65e82" containerName="registry-server" Feb 28 05:42:00 crc kubenswrapper[5014]: I0228 05:42:00.140052 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ecd474-a8ea-4849-8a90-ef530ce65e82" containerName="registry-server" Feb 28 05:42:00 crc kubenswrapper[5014]: E0228 05:42:00.140094 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ecd474-a8ea-4849-8a90-ef530ce65e82" containerName="extract-utilities" Feb 28 05:42:00 crc kubenswrapper[5014]: I0228 05:42:00.140101 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ecd474-a8ea-4849-8a90-ef530ce65e82" containerName="extract-utilities" Feb 28 05:42:00 crc kubenswrapper[5014]: E0228 05:42:00.140118 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ecd474-a8ea-4849-8a90-ef530ce65e82" containerName="extract-content" Feb 28 05:42:00 crc kubenswrapper[5014]: I0228 05:42:00.140126 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ecd474-a8ea-4849-8a90-ef530ce65e82" containerName="extract-content" Feb 28 05:42:00 crc kubenswrapper[5014]: I0228 05:42:00.140304 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="51ecd474-a8ea-4849-8a90-ef530ce65e82" containerName="registry-server" Feb 28 05:42:00 crc kubenswrapper[5014]: I0228 05:42:00.140940 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537622-c6sv6" Feb 28 05:42:00 crc kubenswrapper[5014]: I0228 05:42:00.143190 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-r6pdk" Feb 28 05:42:00 crc kubenswrapper[5014]: I0228 05:42:00.143278 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 05:42:00 crc kubenswrapper[5014]: I0228 05:42:00.143338 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 05:42:00 crc kubenswrapper[5014]: I0228 05:42:00.151446 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537622-c6sv6"] Feb 28 05:42:00 crc kubenswrapper[5014]: I0228 05:42:00.187659 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rkp2w_cd8eb09e-7a57-4b01-b09c-519bbca4c5ed/cp-metrics/0.log" Feb 28 05:42:00 crc kubenswrapper[5014]: I0228 05:42:00.196221 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rkp2w_cd8eb09e-7a57-4b01-b09c-519bbca4c5ed/cp-frr-files/0.log" Feb 28 05:42:00 crc kubenswrapper[5014]: I0228 05:42:00.243440 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rkp2w_cd8eb09e-7a57-4b01-b09c-519bbca4c5ed/cp-metrics/0.log" Feb 28 05:42:00 crc kubenswrapper[5014]: I0228 05:42:00.247889 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rkp2w_cd8eb09e-7a57-4b01-b09c-519bbca4c5ed/cp-reloader/0.log" Feb 28 05:42:00 crc kubenswrapper[5014]: I0228 05:42:00.304725 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xft2l\" (UniqueName: \"kubernetes.io/projected/17e0a40c-e90d-42e3-845e-bc6f8d32c111-kube-api-access-xft2l\") pod \"auto-csr-approver-29537622-c6sv6\" (UID: 
\"17e0a40c-e90d-42e3-845e-bc6f8d32c111\") " pod="openshift-infra/auto-csr-approver-29537622-c6sv6" Feb 28 05:42:00 crc kubenswrapper[5014]: I0228 05:42:00.398564 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rkp2w_cd8eb09e-7a57-4b01-b09c-519bbca4c5ed/cp-reloader/0.log" Feb 28 05:42:00 crc kubenswrapper[5014]: I0228 05:42:00.406937 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xft2l\" (UniqueName: \"kubernetes.io/projected/17e0a40c-e90d-42e3-845e-bc6f8d32c111-kube-api-access-xft2l\") pod \"auto-csr-approver-29537622-c6sv6\" (UID: \"17e0a40c-e90d-42e3-845e-bc6f8d32c111\") " pod="openshift-infra/auto-csr-approver-29537622-c6sv6" Feb 28 05:42:00 crc kubenswrapper[5014]: I0228 05:42:00.438383 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xft2l\" (UniqueName: \"kubernetes.io/projected/17e0a40c-e90d-42e3-845e-bc6f8d32c111-kube-api-access-xft2l\") pod \"auto-csr-approver-29537622-c6sv6\" (UID: \"17e0a40c-e90d-42e3-845e-bc6f8d32c111\") " pod="openshift-infra/auto-csr-approver-29537622-c6sv6" Feb 28 05:42:00 crc kubenswrapper[5014]: I0228 05:42:00.460964 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rkp2w_cd8eb09e-7a57-4b01-b09c-519bbca4c5ed/cp-frr-files/0.log" Feb 28 05:42:00 crc kubenswrapper[5014]: I0228 05:42:00.462414 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537622-c6sv6" Feb 28 05:42:00 crc kubenswrapper[5014]: I0228 05:42:00.465833 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rkp2w_cd8eb09e-7a57-4b01-b09c-519bbca4c5ed/controller/0.log" Feb 28 05:42:00 crc kubenswrapper[5014]: I0228 05:42:00.508540 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rkp2w_cd8eb09e-7a57-4b01-b09c-519bbca4c5ed/cp-metrics/0.log" Feb 28 05:42:00 crc kubenswrapper[5014]: I0228 05:42:00.660374 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rkp2w_cd8eb09e-7a57-4b01-b09c-519bbca4c5ed/frr-metrics/0.log" Feb 28 05:42:00 crc kubenswrapper[5014]: I0228 05:42:00.772002 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rkp2w_cd8eb09e-7a57-4b01-b09c-519bbca4c5ed/kube-rbac-proxy/0.log" Feb 28 05:42:00 crc kubenswrapper[5014]: I0228 05:42:00.787203 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rkp2w_cd8eb09e-7a57-4b01-b09c-519bbca4c5ed/kube-rbac-proxy-frr/0.log" Feb 28 05:42:00 crc kubenswrapper[5014]: I0228 05:42:00.889886 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rkp2w_cd8eb09e-7a57-4b01-b09c-519bbca4c5ed/reloader/0.log" Feb 28 05:42:00 crc kubenswrapper[5014]: I0228 05:42:00.972351 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537622-c6sv6"] Feb 28 05:42:01 crc kubenswrapper[5014]: I0228 05:42:01.077427 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7f989f654f-vwrdt_d1916ff1-d765-4133-8db7-50b8c6c9d3da/frr-k8s-webhook-server/0.log" Feb 28 05:42:01 crc kubenswrapper[5014]: I0228 05:42:01.211565 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537622-c6sv6" 
event={"ID":"17e0a40c-e90d-42e3-845e-bc6f8d32c111","Type":"ContainerStarted","Data":"0a5ee78cba89890287efbfc1a56c9c41a4bdc76300f488751b5761080d354666"} Feb 28 05:42:01 crc kubenswrapper[5014]: I0228 05:42:01.254272 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-c97d79cb8-9k7r6_7765e634-9939-4dca-82bc-847db81c81e4/manager/0.log" Feb 28 05:42:01 crc kubenswrapper[5014]: I0228 05:42:01.323620 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-75b5fcbdc5-txj9m_fec123b5-34af-438f-8a38-306d3484b235/webhook-server/0.log" Feb 28 05:42:01 crc kubenswrapper[5014]: I0228 05:42:01.522480 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-v6tb4_4e21c24c-ac78-4bff-863f-dfd7b10d0c7a/kube-rbac-proxy/0.log" Feb 28 05:42:01 crc kubenswrapper[5014]: I0228 05:42:01.998764 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-v6tb4_4e21c24c-ac78-4bff-863f-dfd7b10d0c7a/speaker/0.log" Feb 28 05:42:02 crc kubenswrapper[5014]: I0228 05:42:02.111943 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-rkp2w_cd8eb09e-7a57-4b01-b09c-519bbca4c5ed/frr/0.log" Feb 28 05:42:02 crc kubenswrapper[5014]: I0228 05:42:02.220122 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537622-c6sv6" event={"ID":"17e0a40c-e90d-42e3-845e-bc6f8d32c111","Type":"ContainerStarted","Data":"778e63242b00b597339ca60d932d05b01371a5bec31ae82093e3fad2a056e397"} Feb 28 05:42:02 crc kubenswrapper[5014]: I0228 05:42:02.237061 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537622-c6sv6" podStartSLOduration=1.429603406 podStartE2EDuration="2.237044648s" podCreationTimestamp="2026-02-28 05:42:00 +0000 UTC" firstStartedPulling="2026-02-28 05:42:00.961610788 +0000 UTC m=+4109.631736698" 
lastFinishedPulling="2026-02-28 05:42:01.76905203 +0000 UTC m=+4110.439177940" observedRunningTime="2026-02-28 05:42:02.230585134 +0000 UTC m=+4110.900711044" watchObservedRunningTime="2026-02-28 05:42:02.237044648 +0000 UTC m=+4110.907170558" Feb 28 05:42:03 crc kubenswrapper[5014]: I0228 05:42:03.230419 5014 generic.go:334] "Generic (PLEG): container finished" podID="17e0a40c-e90d-42e3-845e-bc6f8d32c111" containerID="778e63242b00b597339ca60d932d05b01371a5bec31ae82093e3fad2a056e397" exitCode=0 Feb 28 05:42:03 crc kubenswrapper[5014]: I0228 05:42:03.230464 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537622-c6sv6" event={"ID":"17e0a40c-e90d-42e3-845e-bc6f8d32c111","Type":"ContainerDied","Data":"778e63242b00b597339ca60d932d05b01371a5bec31ae82093e3fad2a056e397"} Feb 28 05:42:04 crc kubenswrapper[5014]: I0228 05:42:04.643779 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537622-c6sv6" Feb 28 05:42:04 crc kubenswrapper[5014]: I0228 05:42:04.798164 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xft2l\" (UniqueName: \"kubernetes.io/projected/17e0a40c-e90d-42e3-845e-bc6f8d32c111-kube-api-access-xft2l\") pod \"17e0a40c-e90d-42e3-845e-bc6f8d32c111\" (UID: \"17e0a40c-e90d-42e3-845e-bc6f8d32c111\") " Feb 28 05:42:04 crc kubenswrapper[5014]: I0228 05:42:04.805184 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17e0a40c-e90d-42e3-845e-bc6f8d32c111-kube-api-access-xft2l" (OuterVolumeSpecName: "kube-api-access-xft2l") pod "17e0a40c-e90d-42e3-845e-bc6f8d32c111" (UID: "17e0a40c-e90d-42e3-845e-bc6f8d32c111"). InnerVolumeSpecName "kube-api-access-xft2l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:42:04 crc kubenswrapper[5014]: I0228 05:42:04.900902 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xft2l\" (UniqueName: \"kubernetes.io/projected/17e0a40c-e90d-42e3-845e-bc6f8d32c111-kube-api-access-xft2l\") on node \"crc\" DevicePath \"\"" Feb 28 05:42:05 crc kubenswrapper[5014]: I0228 05:42:05.250410 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537622-c6sv6" event={"ID":"17e0a40c-e90d-42e3-845e-bc6f8d32c111","Type":"ContainerDied","Data":"0a5ee78cba89890287efbfc1a56c9c41a4bdc76300f488751b5761080d354666"} Feb 28 05:42:05 crc kubenswrapper[5014]: I0228 05:42:05.250465 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a5ee78cba89890287efbfc1a56c9c41a4bdc76300f488751b5761080d354666" Feb 28 05:42:05 crc kubenswrapper[5014]: I0228 05:42:05.250496 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537622-c6sv6" Feb 28 05:42:05 crc kubenswrapper[5014]: I0228 05:42:05.256991 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537616-4smtl"] Feb 28 05:42:05 crc kubenswrapper[5014]: I0228 05:42:05.263847 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537616-4smtl"] Feb 28 05:42:06 crc kubenswrapper[5014]: I0228 05:42:06.188038 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c64a676-9036-492e-a5e4-6201694188a8" path="/var/lib/kubelet/pods/0c64a676-9036-492e-a5e4-6201694188a8/volumes" Feb 28 05:42:07 crc kubenswrapper[5014]: I0228 05:42:07.172417 5014 scope.go:117] "RemoveContainer" containerID="40477bc19fb801309320ac64cfcdd068229d097d3d2605fd0be9518095f50e19" Feb 28 05:42:07 crc kubenswrapper[5014]: E0228 05:42:07.172962 5014 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-cct62_openshift-machine-config-operator(6aad0009-d904-48f8-8e30-82205907ece1)\"" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" Feb 28 05:42:18 crc kubenswrapper[5014]: I0228 05:42:18.139464 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh_f0a92225-1e40-4c6b-af69-652221b1273a/util/0.log" Feb 28 05:42:18 crc kubenswrapper[5014]: I0228 05:42:18.300971 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh_f0a92225-1e40-4c6b-af69-652221b1273a/util/0.log" Feb 28 05:42:18 crc kubenswrapper[5014]: I0228 05:42:18.304643 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh_f0a92225-1e40-4c6b-af69-652221b1273a/pull/0.log" Feb 28 05:42:18 crc kubenswrapper[5014]: I0228 05:42:18.375513 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh_f0a92225-1e40-4c6b-af69-652221b1273a/pull/0.log" Feb 28 05:42:18 crc kubenswrapper[5014]: I0228 05:42:18.586511 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh_f0a92225-1e40-4c6b-af69-652221b1273a/pull/0.log" Feb 28 05:42:18 crc kubenswrapper[5014]: I0228 05:42:18.592161 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh_f0a92225-1e40-4c6b-af69-652221b1273a/extract/0.log" Feb 28 05:42:18 crc kubenswrapper[5014]: I0228 05:42:18.593261 
5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82kj7bh_f0a92225-1e40-4c6b-af69-652221b1273a/util/0.log" Feb 28 05:42:18 crc kubenswrapper[5014]: I0228 05:42:18.892560 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kc599_1286af62-b972-4b45-a18b-f7e0085a1a69/extract-utilities/0.log" Feb 28 05:42:19 crc kubenswrapper[5014]: I0228 05:42:19.004933 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kc599_1286af62-b972-4b45-a18b-f7e0085a1a69/extract-content/0.log" Feb 28 05:42:19 crc kubenswrapper[5014]: I0228 05:42:19.018112 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kc599_1286af62-b972-4b45-a18b-f7e0085a1a69/extract-content/0.log" Feb 28 05:42:19 crc kubenswrapper[5014]: I0228 05:42:19.063287 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kc599_1286af62-b972-4b45-a18b-f7e0085a1a69/extract-utilities/0.log" Feb 28 05:42:19 crc kubenswrapper[5014]: I0228 05:42:19.256022 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kc599_1286af62-b972-4b45-a18b-f7e0085a1a69/extract-utilities/0.log" Feb 28 05:42:19 crc kubenswrapper[5014]: I0228 05:42:19.269398 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kc599_1286af62-b972-4b45-a18b-f7e0085a1a69/extract-content/0.log" Feb 28 05:42:19 crc kubenswrapper[5014]: I0228 05:42:19.485396 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rmvfd_e65a2cc1-a391-48ab-a843-e86f58cf278a/extract-utilities/0.log" Feb 28 05:42:19 crc kubenswrapper[5014]: I0228 05:42:19.652834 5014 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-rmvfd_e65a2cc1-a391-48ab-a843-e86f58cf278a/extract-utilities/0.log" Feb 28 05:42:19 crc kubenswrapper[5014]: I0228 05:42:19.688946 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rmvfd_e65a2cc1-a391-48ab-a843-e86f58cf278a/extract-content/0.log" Feb 28 05:42:19 crc kubenswrapper[5014]: I0228 05:42:19.733447 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rmvfd_e65a2cc1-a391-48ab-a843-e86f58cf278a/extract-content/0.log" Feb 28 05:42:19 crc kubenswrapper[5014]: I0228 05:42:19.745765 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-kc599_1286af62-b972-4b45-a18b-f7e0085a1a69/registry-server/0.log" Feb 28 05:42:19 crc kubenswrapper[5014]: I0228 05:42:19.878406 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rmvfd_e65a2cc1-a391-48ab-a843-e86f58cf278a/extract-utilities/0.log" Feb 28 05:42:19 crc kubenswrapper[5014]: I0228 05:42:19.931108 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rmvfd_e65a2cc1-a391-48ab-a843-e86f58cf278a/extract-content/0.log" Feb 28 05:42:20 crc kubenswrapper[5014]: I0228 05:42:20.082850 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s_37c25bf9-a707-42db-9488-1cd660e44edc/util/0.log" Feb 28 05:42:20 crc kubenswrapper[5014]: I0228 05:42:20.433147 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-rmvfd_e65a2cc1-a391-48ab-a843-e86f58cf278a/registry-server/0.log" Feb 28 05:42:20 crc kubenswrapper[5014]: I0228 05:42:20.674687 5014 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s_37c25bf9-a707-42db-9488-1cd660e44edc/pull/0.log" Feb 28 05:42:20 crc kubenswrapper[5014]: I0228 05:42:20.688080 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s_37c25bf9-a707-42db-9488-1cd660e44edc/util/0.log" Feb 28 05:42:20 crc kubenswrapper[5014]: I0228 05:42:20.700779 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s_37c25bf9-a707-42db-9488-1cd660e44edc/pull/0.log" Feb 28 05:42:20 crc kubenswrapper[5014]: I0228 05:42:20.923707 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s_37c25bf9-a707-42db-9488-1cd660e44edc/util/0.log" Feb 28 05:42:20 crc kubenswrapper[5014]: I0228 05:42:20.939518 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s_37c25bf9-a707-42db-9488-1cd660e44edc/pull/0.log" Feb 28 05:42:21 crc kubenswrapper[5014]: I0228 05:42:21.017153 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f49gd4s_37c25bf9-a707-42db-9488-1cd660e44edc/extract/0.log" Feb 28 05:42:21 crc kubenswrapper[5014]: I0228 05:42:21.146050 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n92hm_c78f8995-32df-4c90-9919-e5e6f53c16ed/extract-utilities/0.log" Feb 28 05:42:21 crc kubenswrapper[5014]: I0228 05:42:21.154254 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-lz2dz_da5f8445-0b83-49d2-8255-21a4074cbf0b/marketplace-operator/0.log" Feb 28 05:42:21 crc kubenswrapper[5014]: 
I0228 05:42:21.171556 5014 scope.go:117] "RemoveContainer" containerID="40477bc19fb801309320ac64cfcdd068229d097d3d2605fd0be9518095f50e19" Feb 28 05:42:21 crc kubenswrapper[5014]: I0228 05:42:21.316992 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n92hm_c78f8995-32df-4c90-9919-e5e6f53c16ed/extract-utilities/0.log" Feb 28 05:42:21 crc kubenswrapper[5014]: I0228 05:42:21.352400 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n92hm_c78f8995-32df-4c90-9919-e5e6f53c16ed/extract-content/0.log" Feb 28 05:42:21 crc kubenswrapper[5014]: I0228 05:42:21.382253 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n92hm_c78f8995-32df-4c90-9919-e5e6f53c16ed/extract-content/0.log" Feb 28 05:42:21 crc kubenswrapper[5014]: I0228 05:42:21.414455 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" event={"ID":"6aad0009-d904-48f8-8e30-82205907ece1","Type":"ContainerStarted","Data":"ac2388f733415025bd6dcef6ebe0c612a2ded51ee4cdea38bfbf3a883792fe2e"} Feb 28 05:42:21 crc kubenswrapper[5014]: I0228 05:42:21.587124 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n92hm_c78f8995-32df-4c90-9919-e5e6f53c16ed/extract-content/0.log" Feb 28 05:42:21 crc kubenswrapper[5014]: I0228 05:42:21.654371 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n92hm_c78f8995-32df-4c90-9919-e5e6f53c16ed/extract-utilities/0.log" Feb 28 05:42:21 crc kubenswrapper[5014]: I0228 05:42:21.774311 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-n92hm_c78f8995-32df-4c90-9919-e5e6f53c16ed/registry-server/0.log" Feb 28 05:42:21 crc kubenswrapper[5014]: I0228 05:42:21.775766 5014 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-9tfwt_86fe7b38-7d96-499b-a693-397309da77bd/extract-utilities/0.log" Feb 28 05:42:22 crc kubenswrapper[5014]: I0228 05:42:22.192445 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9tfwt_86fe7b38-7d96-499b-a693-397309da77bd/extract-content/0.log" Feb 28 05:42:22 crc kubenswrapper[5014]: I0228 05:42:22.192614 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9tfwt_86fe7b38-7d96-499b-a693-397309da77bd/extract-utilities/0.log" Feb 28 05:42:22 crc kubenswrapper[5014]: I0228 05:42:22.249358 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9tfwt_86fe7b38-7d96-499b-a693-397309da77bd/extract-content/0.log" Feb 28 05:42:22 crc kubenswrapper[5014]: I0228 05:42:22.425222 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9tfwt_86fe7b38-7d96-499b-a693-397309da77bd/extract-utilities/0.log" Feb 28 05:42:22 crc kubenswrapper[5014]: I0228 05:42:22.519981 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9tfwt_86fe7b38-7d96-499b-a693-397309da77bd/extract-content/0.log" Feb 28 05:42:22 crc kubenswrapper[5014]: I0228 05:42:22.871133 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9tfwt_86fe7b38-7d96-499b-a693-397309da77bd/registry-server/0.log" Feb 28 05:43:00 crc kubenswrapper[5014]: I0228 05:43:00.209516 5014 scope.go:117] "RemoveContainer" containerID="3af1ef9dfa30f5b77d845a0f5f1aa838cdc400d09ee4408493bbb2670d7b4cad" Feb 28 05:44:00 crc kubenswrapper[5014]: I0228 05:44:00.189565 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537624-hvvfh"] Feb 28 05:44:00 crc kubenswrapper[5014]: E0228 05:44:00.190647 5014 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="17e0a40c-e90d-42e3-845e-bc6f8d32c111" containerName="oc" Feb 28 05:44:00 crc kubenswrapper[5014]: I0228 05:44:00.190668 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="17e0a40c-e90d-42e3-845e-bc6f8d32c111" containerName="oc" Feb 28 05:44:00 crc kubenswrapper[5014]: I0228 05:44:00.191173 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="17e0a40c-e90d-42e3-845e-bc6f8d32c111" containerName="oc" Feb 28 05:44:00 crc kubenswrapper[5014]: I0228 05:44:00.192157 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537624-hvvfh"] Feb 28 05:44:00 crc kubenswrapper[5014]: I0228 05:44:00.192322 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537624-hvvfh" Feb 28 05:44:00 crc kubenswrapper[5014]: I0228 05:44:00.195540 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 05:44:00 crc kubenswrapper[5014]: I0228 05:44:00.196721 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 05:44:00 crc kubenswrapper[5014]: I0228 05:44:00.196913 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-r6pdk" Feb 28 05:44:00 crc kubenswrapper[5014]: I0228 05:44:00.342858 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plb59\" (UniqueName: \"kubernetes.io/projected/e709c8c4-9123-41d3-8b44-6fdb41afbedc-kube-api-access-plb59\") pod \"auto-csr-approver-29537624-hvvfh\" (UID: \"e709c8c4-9123-41d3-8b44-6fdb41afbedc\") " pod="openshift-infra/auto-csr-approver-29537624-hvvfh" Feb 28 05:44:00 crc kubenswrapper[5014]: I0228 05:44:00.444763 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plb59\" (UniqueName: 
\"kubernetes.io/projected/e709c8c4-9123-41d3-8b44-6fdb41afbedc-kube-api-access-plb59\") pod \"auto-csr-approver-29537624-hvvfh\" (UID: \"e709c8c4-9123-41d3-8b44-6fdb41afbedc\") " pod="openshift-infra/auto-csr-approver-29537624-hvvfh" Feb 28 05:44:00 crc kubenswrapper[5014]: I0228 05:44:00.480552 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plb59\" (UniqueName: \"kubernetes.io/projected/e709c8c4-9123-41d3-8b44-6fdb41afbedc-kube-api-access-plb59\") pod \"auto-csr-approver-29537624-hvvfh\" (UID: \"e709c8c4-9123-41d3-8b44-6fdb41afbedc\") " pod="openshift-infra/auto-csr-approver-29537624-hvvfh" Feb 28 05:44:00 crc kubenswrapper[5014]: I0228 05:44:00.539011 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537624-hvvfh" Feb 28 05:44:01 crc kubenswrapper[5014]: I0228 05:44:01.059018 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537624-hvvfh"] Feb 28 05:44:01 crc kubenswrapper[5014]: I0228 05:44:01.580881 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537624-hvvfh" event={"ID":"e709c8c4-9123-41d3-8b44-6fdb41afbedc","Type":"ContainerStarted","Data":"2616eae8cf3c0d80a1a119bb245afd0f50dac5d77a9c8601c03d0fa0808b4c55"} Feb 28 05:44:02 crc kubenswrapper[5014]: I0228 05:44:02.592542 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537624-hvvfh" event={"ID":"e709c8c4-9123-41d3-8b44-6fdb41afbedc","Type":"ContainerStarted","Data":"8a58403a71ad0724e6e0185076ad78edc62734aeda840e08bbbb6267b0d6126c"} Feb 28 05:44:03 crc kubenswrapper[5014]: I0228 05:44:03.606283 5014 generic.go:334] "Generic (PLEG): container finished" podID="e709c8c4-9123-41d3-8b44-6fdb41afbedc" containerID="8a58403a71ad0724e6e0185076ad78edc62734aeda840e08bbbb6267b0d6126c" exitCode=0 Feb 28 05:44:03 crc kubenswrapper[5014]: I0228 05:44:03.606631 5014 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537624-hvvfh" event={"ID":"e709c8c4-9123-41d3-8b44-6fdb41afbedc","Type":"ContainerDied","Data":"8a58403a71ad0724e6e0185076ad78edc62734aeda840e08bbbb6267b0d6126c"} Feb 28 05:44:04 crc kubenswrapper[5014]: I0228 05:44:04.185951 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537624-hvvfh" Feb 28 05:44:04 crc kubenswrapper[5014]: I0228 05:44:04.320158 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plb59\" (UniqueName: \"kubernetes.io/projected/e709c8c4-9123-41d3-8b44-6fdb41afbedc-kube-api-access-plb59\") pod \"e709c8c4-9123-41d3-8b44-6fdb41afbedc\" (UID: \"e709c8c4-9123-41d3-8b44-6fdb41afbedc\") " Feb 28 05:44:04 crc kubenswrapper[5014]: I0228 05:44:04.330120 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e709c8c4-9123-41d3-8b44-6fdb41afbedc-kube-api-access-plb59" (OuterVolumeSpecName: "kube-api-access-plb59") pod "e709c8c4-9123-41d3-8b44-6fdb41afbedc" (UID: "e709c8c4-9123-41d3-8b44-6fdb41afbedc"). InnerVolumeSpecName "kube-api-access-plb59". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:44:04 crc kubenswrapper[5014]: I0228 05:44:04.422749 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plb59\" (UniqueName: \"kubernetes.io/projected/e709c8c4-9123-41d3-8b44-6fdb41afbedc-kube-api-access-plb59\") on node \"crc\" DevicePath \"\"" Feb 28 05:44:04 crc kubenswrapper[5014]: I0228 05:44:04.618126 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537624-hvvfh" event={"ID":"e709c8c4-9123-41d3-8b44-6fdb41afbedc","Type":"ContainerDied","Data":"2616eae8cf3c0d80a1a119bb245afd0f50dac5d77a9c8601c03d0fa0808b4c55"} Feb 28 05:44:04 crc kubenswrapper[5014]: I0228 05:44:04.618185 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2616eae8cf3c0d80a1a119bb245afd0f50dac5d77a9c8601c03d0fa0808b4c55" Feb 28 05:44:04 crc kubenswrapper[5014]: I0228 05:44:04.618261 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537624-hvvfh" Feb 28 05:44:05 crc kubenswrapper[5014]: I0228 05:44:05.315897 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537618-htkqd"] Feb 28 05:44:05 crc kubenswrapper[5014]: I0228 05:44:05.330467 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537618-htkqd"] Feb 28 05:44:06 crc kubenswrapper[5014]: I0228 05:44:06.188731 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="676dc806-f1a9-4fb3-b811-7c015c0f62c1" path="/var/lib/kubelet/pods/676dc806-f1a9-4fb3-b811-7c015c0f62c1/volumes" Feb 28 05:44:06 crc kubenswrapper[5014]: I0228 05:44:06.641105 5014 generic.go:334] "Generic (PLEG): container finished" podID="bab79593-07cf-4a70-881f-fa06508b63af" containerID="6ce7a1813ea157c99d1cc6b62f15e8aefcbc8b50711ca43702e2931f45693b2b" exitCode=0 Feb 28 05:44:06 crc kubenswrapper[5014]: I0228 05:44:06.641149 5014 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-v9fbc/must-gather-t6xkn" event={"ID":"bab79593-07cf-4a70-881f-fa06508b63af","Type":"ContainerDied","Data":"6ce7a1813ea157c99d1cc6b62f15e8aefcbc8b50711ca43702e2931f45693b2b"} Feb 28 05:44:06 crc kubenswrapper[5014]: I0228 05:44:06.641851 5014 scope.go:117] "RemoveContainer" containerID="6ce7a1813ea157c99d1cc6b62f15e8aefcbc8b50711ca43702e2931f45693b2b" Feb 28 05:44:07 crc kubenswrapper[5014]: I0228 05:44:07.693111 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v9fbc_must-gather-t6xkn_bab79593-07cf-4a70-881f-fa06508b63af/gather/0.log" Feb 28 05:44:17 crc kubenswrapper[5014]: I0228 05:44:17.887040 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-v9fbc/must-gather-t6xkn"] Feb 28 05:44:17 crc kubenswrapper[5014]: I0228 05:44:17.887991 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-v9fbc/must-gather-t6xkn" podUID="bab79593-07cf-4a70-881f-fa06508b63af" containerName="copy" containerID="cri-o://71f919f5f21a08514dddbc6c7c83574591e2eb8367456ead9ede8c091975270d" gracePeriod=2 Feb 28 05:44:17 crc kubenswrapper[5014]: I0228 05:44:17.898332 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-v9fbc/must-gather-t6xkn"] Feb 28 05:44:18 crc kubenswrapper[5014]: I0228 05:44:18.331372 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v9fbc_must-gather-t6xkn_bab79593-07cf-4a70-881f-fa06508b63af/copy/0.log" Feb 28 05:44:18 crc kubenswrapper[5014]: I0228 05:44:18.332437 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v9fbc/must-gather-t6xkn" Feb 28 05:44:18 crc kubenswrapper[5014]: I0228 05:44:18.402011 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmng4\" (UniqueName: \"kubernetes.io/projected/bab79593-07cf-4a70-881f-fa06508b63af-kube-api-access-pmng4\") pod \"bab79593-07cf-4a70-881f-fa06508b63af\" (UID: \"bab79593-07cf-4a70-881f-fa06508b63af\") " Feb 28 05:44:18 crc kubenswrapper[5014]: I0228 05:44:18.402121 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bab79593-07cf-4a70-881f-fa06508b63af-must-gather-output\") pod \"bab79593-07cf-4a70-881f-fa06508b63af\" (UID: \"bab79593-07cf-4a70-881f-fa06508b63af\") " Feb 28 05:44:18 crc kubenswrapper[5014]: I0228 05:44:18.408149 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bab79593-07cf-4a70-881f-fa06508b63af-kube-api-access-pmng4" (OuterVolumeSpecName: "kube-api-access-pmng4") pod "bab79593-07cf-4a70-881f-fa06508b63af" (UID: "bab79593-07cf-4a70-881f-fa06508b63af"). InnerVolumeSpecName "kube-api-access-pmng4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:44:18 crc kubenswrapper[5014]: I0228 05:44:18.504314 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmng4\" (UniqueName: \"kubernetes.io/projected/bab79593-07cf-4a70-881f-fa06508b63af-kube-api-access-pmng4\") on node \"crc\" DevicePath \"\"" Feb 28 05:44:18 crc kubenswrapper[5014]: I0228 05:44:18.595955 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bab79593-07cf-4a70-881f-fa06508b63af-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "bab79593-07cf-4a70-881f-fa06508b63af" (UID: "bab79593-07cf-4a70-881f-fa06508b63af"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:44:18 crc kubenswrapper[5014]: I0228 05:44:18.607261 5014 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/bab79593-07cf-4a70-881f-fa06508b63af-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 28 05:44:18 crc kubenswrapper[5014]: I0228 05:44:18.784157 5014 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-v9fbc_must-gather-t6xkn_bab79593-07cf-4a70-881f-fa06508b63af/copy/0.log" Feb 28 05:44:18 crc kubenswrapper[5014]: I0228 05:44:18.785244 5014 generic.go:334] "Generic (PLEG): container finished" podID="bab79593-07cf-4a70-881f-fa06508b63af" containerID="71f919f5f21a08514dddbc6c7c83574591e2eb8367456ead9ede8c091975270d" exitCode=143 Feb 28 05:44:18 crc kubenswrapper[5014]: I0228 05:44:18.785317 5014 scope.go:117] "RemoveContainer" containerID="71f919f5f21a08514dddbc6c7c83574591e2eb8367456ead9ede8c091975270d" Feb 28 05:44:18 crc kubenswrapper[5014]: I0228 05:44:18.785606 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-v9fbc/must-gather-t6xkn" Feb 28 05:44:18 crc kubenswrapper[5014]: I0228 05:44:18.831121 5014 scope.go:117] "RemoveContainer" containerID="6ce7a1813ea157c99d1cc6b62f15e8aefcbc8b50711ca43702e2931f45693b2b" Feb 28 05:44:18 crc kubenswrapper[5014]: I0228 05:44:18.933935 5014 scope.go:117] "RemoveContainer" containerID="71f919f5f21a08514dddbc6c7c83574591e2eb8367456ead9ede8c091975270d" Feb 28 05:44:18 crc kubenswrapper[5014]: E0228 05:44:18.934450 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"71f919f5f21a08514dddbc6c7c83574591e2eb8367456ead9ede8c091975270d\": container with ID starting with 71f919f5f21a08514dddbc6c7c83574591e2eb8367456ead9ede8c091975270d not found: ID does not exist" containerID="71f919f5f21a08514dddbc6c7c83574591e2eb8367456ead9ede8c091975270d" Feb 28 05:44:18 crc kubenswrapper[5014]: I0228 05:44:18.934499 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"71f919f5f21a08514dddbc6c7c83574591e2eb8367456ead9ede8c091975270d"} err="failed to get container status \"71f919f5f21a08514dddbc6c7c83574591e2eb8367456ead9ede8c091975270d\": rpc error: code = NotFound desc = could not find container \"71f919f5f21a08514dddbc6c7c83574591e2eb8367456ead9ede8c091975270d\": container with ID starting with 71f919f5f21a08514dddbc6c7c83574591e2eb8367456ead9ede8c091975270d not found: ID does not exist" Feb 28 05:44:18 crc kubenswrapper[5014]: I0228 05:44:18.934528 5014 scope.go:117] "RemoveContainer" containerID="6ce7a1813ea157c99d1cc6b62f15e8aefcbc8b50711ca43702e2931f45693b2b" Feb 28 05:44:18 crc kubenswrapper[5014]: E0228 05:44:18.939767 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ce7a1813ea157c99d1cc6b62f15e8aefcbc8b50711ca43702e2931f45693b2b\": container with ID starting with 
6ce7a1813ea157c99d1cc6b62f15e8aefcbc8b50711ca43702e2931f45693b2b not found: ID does not exist" containerID="6ce7a1813ea157c99d1cc6b62f15e8aefcbc8b50711ca43702e2931f45693b2b" Feb 28 05:44:18 crc kubenswrapper[5014]: I0228 05:44:18.939834 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ce7a1813ea157c99d1cc6b62f15e8aefcbc8b50711ca43702e2931f45693b2b"} err="failed to get container status \"6ce7a1813ea157c99d1cc6b62f15e8aefcbc8b50711ca43702e2931f45693b2b\": rpc error: code = NotFound desc = could not find container \"6ce7a1813ea157c99d1cc6b62f15e8aefcbc8b50711ca43702e2931f45693b2b\": container with ID starting with 6ce7a1813ea157c99d1cc6b62f15e8aefcbc8b50711ca43702e2931f45693b2b not found: ID does not exist" Feb 28 05:44:20 crc kubenswrapper[5014]: I0228 05:44:20.183523 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bab79593-07cf-4a70-881f-fa06508b63af" path="/var/lib/kubelet/pods/bab79593-07cf-4a70-881f-fa06508b63af/volumes" Feb 28 05:44:45 crc kubenswrapper[5014]: I0228 05:44:45.706463 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 05:44:45 crc kubenswrapper[5014]: I0228 05:44:45.707224 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 05:45:00 crc kubenswrapper[5014]: I0228 05:45:00.171712 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537625-wnmqv"] Feb 28 05:45:00 crc kubenswrapper[5014]: 
E0228 05:45:00.173282 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bab79593-07cf-4a70-881f-fa06508b63af" containerName="copy" Feb 28 05:45:00 crc kubenswrapper[5014]: I0228 05:45:00.173318 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="bab79593-07cf-4a70-881f-fa06508b63af" containerName="copy" Feb 28 05:45:00 crc kubenswrapper[5014]: E0228 05:45:00.173399 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bab79593-07cf-4a70-881f-fa06508b63af" containerName="gather" Feb 28 05:45:00 crc kubenswrapper[5014]: I0228 05:45:00.173420 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="bab79593-07cf-4a70-881f-fa06508b63af" containerName="gather" Feb 28 05:45:00 crc kubenswrapper[5014]: E0228 05:45:00.173449 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e709c8c4-9123-41d3-8b44-6fdb41afbedc" containerName="oc" Feb 28 05:45:00 crc kubenswrapper[5014]: I0228 05:45:00.173465 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="e709c8c4-9123-41d3-8b44-6fdb41afbedc" containerName="oc" Feb 28 05:45:00 crc kubenswrapper[5014]: I0228 05:45:00.173930 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="bab79593-07cf-4a70-881f-fa06508b63af" containerName="gather" Feb 28 05:45:00 crc kubenswrapper[5014]: I0228 05:45:00.173961 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="e709c8c4-9123-41d3-8b44-6fdb41afbedc" containerName="oc" Feb 28 05:45:00 crc kubenswrapper[5014]: I0228 05:45:00.174054 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="bab79593-07cf-4a70-881f-fa06508b63af" containerName="copy" Feb 28 05:45:00 crc kubenswrapper[5014]: I0228 05:45:00.175610 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537625-wnmqv" Feb 28 05:45:00 crc kubenswrapper[5014]: I0228 05:45:00.178255 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 28 05:45:00 crc kubenswrapper[5014]: I0228 05:45:00.182417 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 28 05:45:00 crc kubenswrapper[5014]: I0228 05:45:00.202897 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537625-wnmqv"] Feb 28 05:45:00 crc kubenswrapper[5014]: I0228 05:45:00.306326 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f919cd5b-ba46-44b6-9cb0-3664ca4841be-config-volume\") pod \"collect-profiles-29537625-wnmqv\" (UID: \"f919cd5b-ba46-44b6-9cb0-3664ca4841be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537625-wnmqv" Feb 28 05:45:00 crc kubenswrapper[5014]: I0228 05:45:00.306474 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f919cd5b-ba46-44b6-9cb0-3664ca4841be-secret-volume\") pod \"collect-profiles-29537625-wnmqv\" (UID: \"f919cd5b-ba46-44b6-9cb0-3664ca4841be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537625-wnmqv" Feb 28 05:45:00 crc kubenswrapper[5014]: I0228 05:45:00.306609 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jrsd\" (UniqueName: \"kubernetes.io/projected/f919cd5b-ba46-44b6-9cb0-3664ca4841be-kube-api-access-7jrsd\") pod \"collect-profiles-29537625-wnmqv\" (UID: \"f919cd5b-ba46-44b6-9cb0-3664ca4841be\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29537625-wnmqv" Feb 28 05:45:00 crc kubenswrapper[5014]: I0228 05:45:00.356533 5014 scope.go:117] "RemoveContainer" containerID="998f94e384cbec622704cdfc575364575f72ad4c48bead1520d240184b32957a" Feb 28 05:45:00 crc kubenswrapper[5014]: I0228 05:45:00.385672 5014 scope.go:117] "RemoveContainer" containerID="bf9307b0ee9fd74f9f90d7441d91d4c9ee1f67b59133aeeb713d65bcc6ebb064" Feb 28 05:45:00 crc kubenswrapper[5014]: I0228 05:45:00.410492 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jrsd\" (UniqueName: \"kubernetes.io/projected/f919cd5b-ba46-44b6-9cb0-3664ca4841be-kube-api-access-7jrsd\") pod \"collect-profiles-29537625-wnmqv\" (UID: \"f919cd5b-ba46-44b6-9cb0-3664ca4841be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537625-wnmqv" Feb 28 05:45:00 crc kubenswrapper[5014]: I0228 05:45:00.410566 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f919cd5b-ba46-44b6-9cb0-3664ca4841be-config-volume\") pod \"collect-profiles-29537625-wnmqv\" (UID: \"f919cd5b-ba46-44b6-9cb0-3664ca4841be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537625-wnmqv" Feb 28 05:45:00 crc kubenswrapper[5014]: I0228 05:45:00.410703 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f919cd5b-ba46-44b6-9cb0-3664ca4841be-secret-volume\") pod \"collect-profiles-29537625-wnmqv\" (UID: \"f919cd5b-ba46-44b6-9cb0-3664ca4841be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537625-wnmqv" Feb 28 05:45:00 crc kubenswrapper[5014]: I0228 05:45:00.412914 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f919cd5b-ba46-44b6-9cb0-3664ca4841be-config-volume\") pod \"collect-profiles-29537625-wnmqv\" (UID: 
\"f919cd5b-ba46-44b6-9cb0-3664ca4841be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537625-wnmqv" Feb 28 05:45:00 crc kubenswrapper[5014]: I0228 05:45:00.421307 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f919cd5b-ba46-44b6-9cb0-3664ca4841be-secret-volume\") pod \"collect-profiles-29537625-wnmqv\" (UID: \"f919cd5b-ba46-44b6-9cb0-3664ca4841be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537625-wnmqv" Feb 28 05:45:00 crc kubenswrapper[5014]: I0228 05:45:00.432920 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jrsd\" (UniqueName: \"kubernetes.io/projected/f919cd5b-ba46-44b6-9cb0-3664ca4841be-kube-api-access-7jrsd\") pod \"collect-profiles-29537625-wnmqv\" (UID: \"f919cd5b-ba46-44b6-9cb0-3664ca4841be\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537625-wnmqv" Feb 28 05:45:00 crc kubenswrapper[5014]: I0228 05:45:00.499700 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537625-wnmqv" Feb 28 05:45:00 crc kubenswrapper[5014]: I0228 05:45:00.968144 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537625-wnmqv"] Feb 28 05:45:01 crc kubenswrapper[5014]: I0228 05:45:01.272946 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537625-wnmqv" event={"ID":"f919cd5b-ba46-44b6-9cb0-3664ca4841be","Type":"ContainerStarted","Data":"940090e52c77b0ca187c2e6a758e87eb0fc0e770c2ed872e20e79510e84e64fa"} Feb 28 05:45:01 crc kubenswrapper[5014]: I0228 05:45:01.273347 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537625-wnmqv" event={"ID":"f919cd5b-ba46-44b6-9cb0-3664ca4841be","Type":"ContainerStarted","Data":"58c971d0ea553caa68116549cd0bffc57898a5ff583ca776d4a9d85084f329a8"} Feb 28 05:45:01 crc kubenswrapper[5014]: I0228 05:45:01.290430 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29537625-wnmqv" podStartSLOduration=1.290410086 podStartE2EDuration="1.290410086s" podCreationTimestamp="2026-02-28 05:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-28 05:45:01.288069973 +0000 UTC m=+4289.958195893" watchObservedRunningTime="2026-02-28 05:45:01.290410086 +0000 UTC m=+4289.960535996" Feb 28 05:45:02 crc kubenswrapper[5014]: I0228 05:45:02.286450 5014 generic.go:334] "Generic (PLEG): container finished" podID="f919cd5b-ba46-44b6-9cb0-3664ca4841be" containerID="940090e52c77b0ca187c2e6a758e87eb0fc0e770c2ed872e20e79510e84e64fa" exitCode=0 Feb 28 05:45:02 crc kubenswrapper[5014]: I0228 05:45:02.286533 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29537625-wnmqv" event={"ID":"f919cd5b-ba46-44b6-9cb0-3664ca4841be","Type":"ContainerDied","Data":"940090e52c77b0ca187c2e6a758e87eb0fc0e770c2ed872e20e79510e84e64fa"} Feb 28 05:45:03 crc kubenswrapper[5014]: I0228 05:45:03.788307 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537625-wnmqv" Feb 28 05:45:03 crc kubenswrapper[5014]: I0228 05:45:03.990012 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f919cd5b-ba46-44b6-9cb0-3664ca4841be-config-volume\") pod \"f919cd5b-ba46-44b6-9cb0-3664ca4841be\" (UID: \"f919cd5b-ba46-44b6-9cb0-3664ca4841be\") " Feb 28 05:45:03 crc kubenswrapper[5014]: I0228 05:45:03.990112 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jrsd\" (UniqueName: \"kubernetes.io/projected/f919cd5b-ba46-44b6-9cb0-3664ca4841be-kube-api-access-7jrsd\") pod \"f919cd5b-ba46-44b6-9cb0-3664ca4841be\" (UID: \"f919cd5b-ba46-44b6-9cb0-3664ca4841be\") " Feb 28 05:45:03 crc kubenswrapper[5014]: I0228 05:45:03.990322 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f919cd5b-ba46-44b6-9cb0-3664ca4841be-secret-volume\") pod \"f919cd5b-ba46-44b6-9cb0-3664ca4841be\" (UID: \"f919cd5b-ba46-44b6-9cb0-3664ca4841be\") " Feb 28 05:45:03 crc kubenswrapper[5014]: I0228 05:45:03.991555 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f919cd5b-ba46-44b6-9cb0-3664ca4841be-config-volume" (OuterVolumeSpecName: "config-volume") pod "f919cd5b-ba46-44b6-9cb0-3664ca4841be" (UID: "f919cd5b-ba46-44b6-9cb0-3664ca4841be"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 28 05:45:04 crc kubenswrapper[5014]: I0228 05:45:04.092441 5014 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f919cd5b-ba46-44b6-9cb0-3664ca4841be-config-volume\") on node \"crc\" DevicePath \"\"" Feb 28 05:45:04 crc kubenswrapper[5014]: I0228 05:45:04.307516 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537625-wnmqv" event={"ID":"f919cd5b-ba46-44b6-9cb0-3664ca4841be","Type":"ContainerDied","Data":"58c971d0ea553caa68116549cd0bffc57898a5ff583ca776d4a9d85084f329a8"} Feb 28 05:45:04 crc kubenswrapper[5014]: I0228 05:45:04.307967 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58c971d0ea553caa68116549cd0bffc57898a5ff583ca776d4a9d85084f329a8" Feb 28 05:45:04 crc kubenswrapper[5014]: I0228 05:45:04.308170 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537625-wnmqv" Feb 28 05:45:04 crc kubenswrapper[5014]: I0228 05:45:04.392692 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537580-zsjhz"] Feb 28 05:45:04 crc kubenswrapper[5014]: I0228 05:45:04.403093 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537580-zsjhz"] Feb 28 05:45:04 crc kubenswrapper[5014]: I0228 05:45:04.610453 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f919cd5b-ba46-44b6-9cb0-3664ca4841be-kube-api-access-7jrsd" (OuterVolumeSpecName: "kube-api-access-7jrsd") pod "f919cd5b-ba46-44b6-9cb0-3664ca4841be" (UID: "f919cd5b-ba46-44b6-9cb0-3664ca4841be"). InnerVolumeSpecName "kube-api-access-7jrsd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:45:04 crc kubenswrapper[5014]: I0228 05:45:04.610777 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f919cd5b-ba46-44b6-9cb0-3664ca4841be-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f919cd5b-ba46-44b6-9cb0-3664ca4841be" (UID: "f919cd5b-ba46-44b6-9cb0-3664ca4841be"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 28 05:45:04 crc kubenswrapper[5014]: I0228 05:45:04.702944 5014 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f919cd5b-ba46-44b6-9cb0-3664ca4841be-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 28 05:45:04 crc kubenswrapper[5014]: I0228 05:45:04.703357 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jrsd\" (UniqueName: \"kubernetes.io/projected/f919cd5b-ba46-44b6-9cb0-3664ca4841be-kube-api-access-7jrsd\") on node \"crc\" DevicePath \"\"" Feb 28 05:45:06 crc kubenswrapper[5014]: I0228 05:45:06.188252 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b41ac68c-5535-42b1-81e3-c802c005f146" path="/var/lib/kubelet/pods/b41ac68c-5535-42b1-81e3-c802c005f146/volumes" Feb 28 05:45:15 crc kubenswrapper[5014]: I0228 05:45:15.706624 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 05:45:15 crc kubenswrapper[5014]: I0228 05:45:15.707183 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Feb 28 05:45:23 crc kubenswrapper[5014]: I0228 05:45:23.482481 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-75pck"] Feb 28 05:45:23 crc kubenswrapper[5014]: E0228 05:45:23.483537 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f919cd5b-ba46-44b6-9cb0-3664ca4841be" containerName="collect-profiles" Feb 28 05:45:23 crc kubenswrapper[5014]: I0228 05:45:23.483552 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="f919cd5b-ba46-44b6-9cb0-3664ca4841be" containerName="collect-profiles" Feb 28 05:45:23 crc kubenswrapper[5014]: I0228 05:45:23.483781 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="f919cd5b-ba46-44b6-9cb0-3664ca4841be" containerName="collect-profiles" Feb 28 05:45:23 crc kubenswrapper[5014]: I0228 05:45:23.485295 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-75pck" Feb 28 05:45:23 crc kubenswrapper[5014]: I0228 05:45:23.514500 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-75pck"] Feb 28 05:45:23 crc kubenswrapper[5014]: I0228 05:45:23.617733 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fgqk\" (UniqueName: \"kubernetes.io/projected/5a5c0749-f88e-44c2-a0da-409f15e642d1-kube-api-access-6fgqk\") pod \"redhat-marketplace-75pck\" (UID: \"5a5c0749-f88e-44c2-a0da-409f15e642d1\") " pod="openshift-marketplace/redhat-marketplace-75pck" Feb 28 05:45:23 crc kubenswrapper[5014]: I0228 05:45:23.617958 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a5c0749-f88e-44c2-a0da-409f15e642d1-catalog-content\") pod \"redhat-marketplace-75pck\" (UID: \"5a5c0749-f88e-44c2-a0da-409f15e642d1\") " 
pod="openshift-marketplace/redhat-marketplace-75pck" Feb 28 05:45:23 crc kubenswrapper[5014]: I0228 05:45:23.618117 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a5c0749-f88e-44c2-a0da-409f15e642d1-utilities\") pod \"redhat-marketplace-75pck\" (UID: \"5a5c0749-f88e-44c2-a0da-409f15e642d1\") " pod="openshift-marketplace/redhat-marketplace-75pck" Feb 28 05:45:23 crc kubenswrapper[5014]: I0228 05:45:23.720382 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fgqk\" (UniqueName: \"kubernetes.io/projected/5a5c0749-f88e-44c2-a0da-409f15e642d1-kube-api-access-6fgqk\") pod \"redhat-marketplace-75pck\" (UID: \"5a5c0749-f88e-44c2-a0da-409f15e642d1\") " pod="openshift-marketplace/redhat-marketplace-75pck" Feb 28 05:45:23 crc kubenswrapper[5014]: I0228 05:45:23.720458 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a5c0749-f88e-44c2-a0da-409f15e642d1-catalog-content\") pod \"redhat-marketplace-75pck\" (UID: \"5a5c0749-f88e-44c2-a0da-409f15e642d1\") " pod="openshift-marketplace/redhat-marketplace-75pck" Feb 28 05:45:23 crc kubenswrapper[5014]: I0228 05:45:23.720504 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a5c0749-f88e-44c2-a0da-409f15e642d1-utilities\") pod \"redhat-marketplace-75pck\" (UID: \"5a5c0749-f88e-44c2-a0da-409f15e642d1\") " pod="openshift-marketplace/redhat-marketplace-75pck" Feb 28 05:45:23 crc kubenswrapper[5014]: I0228 05:45:23.721190 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a5c0749-f88e-44c2-a0da-409f15e642d1-catalog-content\") pod \"redhat-marketplace-75pck\" (UID: \"5a5c0749-f88e-44c2-a0da-409f15e642d1\") " 
pod="openshift-marketplace/redhat-marketplace-75pck" Feb 28 05:45:23 crc kubenswrapper[5014]: I0228 05:45:23.721240 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a5c0749-f88e-44c2-a0da-409f15e642d1-utilities\") pod \"redhat-marketplace-75pck\" (UID: \"5a5c0749-f88e-44c2-a0da-409f15e642d1\") " pod="openshift-marketplace/redhat-marketplace-75pck" Feb 28 05:45:23 crc kubenswrapper[5014]: I0228 05:45:23.742051 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fgqk\" (UniqueName: \"kubernetes.io/projected/5a5c0749-f88e-44c2-a0da-409f15e642d1-kube-api-access-6fgqk\") pod \"redhat-marketplace-75pck\" (UID: \"5a5c0749-f88e-44c2-a0da-409f15e642d1\") " pod="openshift-marketplace/redhat-marketplace-75pck" Feb 28 05:45:23 crc kubenswrapper[5014]: I0228 05:45:23.808877 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-75pck" Feb 28 05:45:24 crc kubenswrapper[5014]: I0228 05:45:24.332383 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-75pck"] Feb 28 05:45:24 crc kubenswrapper[5014]: I0228 05:45:24.529163 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-75pck" event={"ID":"5a5c0749-f88e-44c2-a0da-409f15e642d1","Type":"ContainerStarted","Data":"9f167d339fe335cbb147b7e5ea3ab963a65acbdbaa25da696b750498daf4cd3d"} Feb 28 05:45:25 crc kubenswrapper[5014]: I0228 05:45:25.544217 5014 generic.go:334] "Generic (PLEG): container finished" podID="5a5c0749-f88e-44c2-a0da-409f15e642d1" containerID="be592c538f13e2b6b65b543c6dd56a57b79e974b0e2788a254737d5b3bf1c5c6" exitCode=0 Feb 28 05:45:25 crc kubenswrapper[5014]: I0228 05:45:25.544633 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-75pck" 
event={"ID":"5a5c0749-f88e-44c2-a0da-409f15e642d1","Type":"ContainerDied","Data":"be592c538f13e2b6b65b543c6dd56a57b79e974b0e2788a254737d5b3bf1c5c6"} Feb 28 05:45:25 crc kubenswrapper[5014]: I0228 05:45:25.546670 5014 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 28 05:45:26 crc kubenswrapper[5014]: I0228 05:45:26.574013 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-75pck" event={"ID":"5a5c0749-f88e-44c2-a0da-409f15e642d1","Type":"ContainerStarted","Data":"d4000798ab6e9fd6ff554022fc6a799d0b93eb8e0379c308731225c41d900865"} Feb 28 05:45:27 crc kubenswrapper[5014]: I0228 05:45:27.592103 5014 generic.go:334] "Generic (PLEG): container finished" podID="5a5c0749-f88e-44c2-a0da-409f15e642d1" containerID="d4000798ab6e9fd6ff554022fc6a799d0b93eb8e0379c308731225c41d900865" exitCode=0 Feb 28 05:45:27 crc kubenswrapper[5014]: I0228 05:45:27.592452 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-75pck" event={"ID":"5a5c0749-f88e-44c2-a0da-409f15e642d1","Type":"ContainerDied","Data":"d4000798ab6e9fd6ff554022fc6a799d0b93eb8e0379c308731225c41d900865"} Feb 28 05:45:28 crc kubenswrapper[5014]: I0228 05:45:28.601706 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-75pck" event={"ID":"5a5c0749-f88e-44c2-a0da-409f15e642d1","Type":"ContainerStarted","Data":"320286c7a9c74fa7d26a8eda688cdb0d70e80c9377fc002d0c409525dfbfe542"} Feb 28 05:45:28 crc kubenswrapper[5014]: I0228 05:45:28.625664 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-75pck" podStartSLOduration=3.174528266 podStartE2EDuration="5.625648294s" podCreationTimestamp="2026-02-28 05:45:23 +0000 UTC" firstStartedPulling="2026-02-28 05:45:25.546441194 +0000 UTC m=+4314.216567104" lastFinishedPulling="2026-02-28 05:45:27.997561192 +0000 UTC 
m=+4316.667687132" observedRunningTime="2026-02-28 05:45:28.618875901 +0000 UTC m=+4317.289001811" watchObservedRunningTime="2026-02-28 05:45:28.625648294 +0000 UTC m=+4317.295774204" Feb 28 05:45:33 crc kubenswrapper[5014]: I0228 05:45:33.810030 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-75pck" Feb 28 05:45:33 crc kubenswrapper[5014]: I0228 05:45:33.810894 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-75pck" Feb 28 05:45:33 crc kubenswrapper[5014]: I0228 05:45:33.897971 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-75pck" Feb 28 05:45:34 crc kubenswrapper[5014]: I0228 05:45:34.732245 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-75pck" Feb 28 05:45:34 crc kubenswrapper[5014]: I0228 05:45:34.806512 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-75pck"] Feb 28 05:45:36 crc kubenswrapper[5014]: I0228 05:45:36.694438 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-75pck" podUID="5a5c0749-f88e-44c2-a0da-409f15e642d1" containerName="registry-server" containerID="cri-o://320286c7a9c74fa7d26a8eda688cdb0d70e80c9377fc002d0c409525dfbfe542" gracePeriod=2 Feb 28 05:45:37 crc kubenswrapper[5014]: I0228 05:45:37.219505 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-75pck" Feb 28 05:45:37 crc kubenswrapper[5014]: I0228 05:45:37.321855 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a5c0749-f88e-44c2-a0da-409f15e642d1-catalog-content\") pod \"5a5c0749-f88e-44c2-a0da-409f15e642d1\" (UID: \"5a5c0749-f88e-44c2-a0da-409f15e642d1\") " Feb 28 05:45:37 crc kubenswrapper[5014]: I0228 05:45:37.322521 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fgqk\" (UniqueName: \"kubernetes.io/projected/5a5c0749-f88e-44c2-a0da-409f15e642d1-kube-api-access-6fgqk\") pod \"5a5c0749-f88e-44c2-a0da-409f15e642d1\" (UID: \"5a5c0749-f88e-44c2-a0da-409f15e642d1\") " Feb 28 05:45:37 crc kubenswrapper[5014]: I0228 05:45:37.322591 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a5c0749-f88e-44c2-a0da-409f15e642d1-utilities\") pod \"5a5c0749-f88e-44c2-a0da-409f15e642d1\" (UID: \"5a5c0749-f88e-44c2-a0da-409f15e642d1\") " Feb 28 05:45:37 crc kubenswrapper[5014]: I0228 05:45:37.323682 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a5c0749-f88e-44c2-a0da-409f15e642d1-utilities" (OuterVolumeSpecName: "utilities") pod "5a5c0749-f88e-44c2-a0da-409f15e642d1" (UID: "5a5c0749-f88e-44c2-a0da-409f15e642d1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:45:37 crc kubenswrapper[5014]: I0228 05:45:37.324383 5014 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a5c0749-f88e-44c2-a0da-409f15e642d1-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 05:45:37 crc kubenswrapper[5014]: I0228 05:45:37.332086 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a5c0749-f88e-44c2-a0da-409f15e642d1-kube-api-access-6fgqk" (OuterVolumeSpecName: "kube-api-access-6fgqk") pod "5a5c0749-f88e-44c2-a0da-409f15e642d1" (UID: "5a5c0749-f88e-44c2-a0da-409f15e642d1"). InnerVolumeSpecName "kube-api-access-6fgqk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:45:37 crc kubenswrapper[5014]: I0228 05:45:37.358265 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a5c0749-f88e-44c2-a0da-409f15e642d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a5c0749-f88e-44c2-a0da-409f15e642d1" (UID: "5a5c0749-f88e-44c2-a0da-409f15e642d1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:45:37 crc kubenswrapper[5014]: I0228 05:45:37.426724 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fgqk\" (UniqueName: \"kubernetes.io/projected/5a5c0749-f88e-44c2-a0da-409f15e642d1-kube-api-access-6fgqk\") on node \"crc\" DevicePath \"\"" Feb 28 05:45:37 crc kubenswrapper[5014]: I0228 05:45:37.426788 5014 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a5c0749-f88e-44c2-a0da-409f15e642d1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 05:45:37 crc kubenswrapper[5014]: I0228 05:45:37.713965 5014 generic.go:334] "Generic (PLEG): container finished" podID="5a5c0749-f88e-44c2-a0da-409f15e642d1" containerID="320286c7a9c74fa7d26a8eda688cdb0d70e80c9377fc002d0c409525dfbfe542" exitCode=0 Feb 28 05:45:37 crc kubenswrapper[5014]: I0228 05:45:37.714035 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-75pck" event={"ID":"5a5c0749-f88e-44c2-a0da-409f15e642d1","Type":"ContainerDied","Data":"320286c7a9c74fa7d26a8eda688cdb0d70e80c9377fc002d0c409525dfbfe542"} Feb 28 05:45:37 crc kubenswrapper[5014]: I0228 05:45:37.714084 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-75pck" event={"ID":"5a5c0749-f88e-44c2-a0da-409f15e642d1","Type":"ContainerDied","Data":"9f167d339fe335cbb147b7e5ea3ab963a65acbdbaa25da696b750498daf4cd3d"} Feb 28 05:45:37 crc kubenswrapper[5014]: I0228 05:45:37.714089 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-75pck" Feb 28 05:45:37 crc kubenswrapper[5014]: I0228 05:45:37.714114 5014 scope.go:117] "RemoveContainer" containerID="320286c7a9c74fa7d26a8eda688cdb0d70e80c9377fc002d0c409525dfbfe542" Feb 28 05:45:37 crc kubenswrapper[5014]: I0228 05:45:37.741062 5014 scope.go:117] "RemoveContainer" containerID="d4000798ab6e9fd6ff554022fc6a799d0b93eb8e0379c308731225c41d900865" Feb 28 05:45:37 crc kubenswrapper[5014]: I0228 05:45:37.763339 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-75pck"] Feb 28 05:45:37 crc kubenswrapper[5014]: I0228 05:45:37.764434 5014 scope.go:117] "RemoveContainer" containerID="be592c538f13e2b6b65b543c6dd56a57b79e974b0e2788a254737d5b3bf1c5c6" Feb 28 05:45:37 crc kubenswrapper[5014]: I0228 05:45:37.772643 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-75pck"] Feb 28 05:45:37 crc kubenswrapper[5014]: I0228 05:45:37.824874 5014 scope.go:117] "RemoveContainer" containerID="320286c7a9c74fa7d26a8eda688cdb0d70e80c9377fc002d0c409525dfbfe542" Feb 28 05:45:37 crc kubenswrapper[5014]: E0228 05:45:37.825254 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"320286c7a9c74fa7d26a8eda688cdb0d70e80c9377fc002d0c409525dfbfe542\": container with ID starting with 320286c7a9c74fa7d26a8eda688cdb0d70e80c9377fc002d0c409525dfbfe542 not found: ID does not exist" containerID="320286c7a9c74fa7d26a8eda688cdb0d70e80c9377fc002d0c409525dfbfe542" Feb 28 05:45:37 crc kubenswrapper[5014]: I0228 05:45:37.825283 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"320286c7a9c74fa7d26a8eda688cdb0d70e80c9377fc002d0c409525dfbfe542"} err="failed to get container status \"320286c7a9c74fa7d26a8eda688cdb0d70e80c9377fc002d0c409525dfbfe542\": rpc error: code = NotFound desc = could not find container 
\"320286c7a9c74fa7d26a8eda688cdb0d70e80c9377fc002d0c409525dfbfe542\": container with ID starting with 320286c7a9c74fa7d26a8eda688cdb0d70e80c9377fc002d0c409525dfbfe542 not found: ID does not exist" Feb 28 05:45:37 crc kubenswrapper[5014]: I0228 05:45:37.825302 5014 scope.go:117] "RemoveContainer" containerID="d4000798ab6e9fd6ff554022fc6a799d0b93eb8e0379c308731225c41d900865" Feb 28 05:45:37 crc kubenswrapper[5014]: E0228 05:45:37.825639 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4000798ab6e9fd6ff554022fc6a799d0b93eb8e0379c308731225c41d900865\": container with ID starting with d4000798ab6e9fd6ff554022fc6a799d0b93eb8e0379c308731225c41d900865 not found: ID does not exist" containerID="d4000798ab6e9fd6ff554022fc6a799d0b93eb8e0379c308731225c41d900865" Feb 28 05:45:37 crc kubenswrapper[5014]: I0228 05:45:37.825718 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4000798ab6e9fd6ff554022fc6a799d0b93eb8e0379c308731225c41d900865"} err="failed to get container status \"d4000798ab6e9fd6ff554022fc6a799d0b93eb8e0379c308731225c41d900865\": rpc error: code = NotFound desc = could not find container \"d4000798ab6e9fd6ff554022fc6a799d0b93eb8e0379c308731225c41d900865\": container with ID starting with d4000798ab6e9fd6ff554022fc6a799d0b93eb8e0379c308731225c41d900865 not found: ID does not exist" Feb 28 05:45:37 crc kubenswrapper[5014]: I0228 05:45:37.825750 5014 scope.go:117] "RemoveContainer" containerID="be592c538f13e2b6b65b543c6dd56a57b79e974b0e2788a254737d5b3bf1c5c6" Feb 28 05:45:37 crc kubenswrapper[5014]: E0228 05:45:37.826047 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be592c538f13e2b6b65b543c6dd56a57b79e974b0e2788a254737d5b3bf1c5c6\": container with ID starting with be592c538f13e2b6b65b543c6dd56a57b79e974b0e2788a254737d5b3bf1c5c6 not found: ID does not exist" 
containerID="be592c538f13e2b6b65b543c6dd56a57b79e974b0e2788a254737d5b3bf1c5c6" Feb 28 05:45:37 crc kubenswrapper[5014]: I0228 05:45:37.826069 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be592c538f13e2b6b65b543c6dd56a57b79e974b0e2788a254737d5b3bf1c5c6"} err="failed to get container status \"be592c538f13e2b6b65b543c6dd56a57b79e974b0e2788a254737d5b3bf1c5c6\": rpc error: code = NotFound desc = could not find container \"be592c538f13e2b6b65b543c6dd56a57b79e974b0e2788a254737d5b3bf1c5c6\": container with ID starting with be592c538f13e2b6b65b543c6dd56a57b79e974b0e2788a254737d5b3bf1c5c6 not found: ID does not exist" Feb 28 05:45:38 crc kubenswrapper[5014]: I0228 05:45:38.187007 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a5c0749-f88e-44c2-a0da-409f15e642d1" path="/var/lib/kubelet/pods/5a5c0749-f88e-44c2-a0da-409f15e642d1/volumes" Feb 28 05:45:45 crc kubenswrapper[5014]: I0228 05:45:45.707247 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 05:45:45 crc kubenswrapper[5014]: I0228 05:45:45.708172 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 05:45:45 crc kubenswrapper[5014]: I0228 05:45:45.708240 5014 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-cct62" Feb 28 05:45:45 crc kubenswrapper[5014]: I0228 05:45:45.709364 5014 kuberuntime_manager.go:1027] "Message for Container of 
pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ac2388f733415025bd6dcef6ebe0c612a2ded51ee4cdea38bfbf3a883792fe2e"} pod="openshift-machine-config-operator/machine-config-daemon-cct62" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 28 05:45:45 crc kubenswrapper[5014]: I0228 05:45:45.709488 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" containerID="cri-o://ac2388f733415025bd6dcef6ebe0c612a2ded51ee4cdea38bfbf3a883792fe2e" gracePeriod=600 Feb 28 05:45:46 crc kubenswrapper[5014]: I0228 05:45:46.831712 5014 generic.go:334] "Generic (PLEG): container finished" podID="6aad0009-d904-48f8-8e30-82205907ece1" containerID="ac2388f733415025bd6dcef6ebe0c612a2ded51ee4cdea38bfbf3a883792fe2e" exitCode=0 Feb 28 05:45:46 crc kubenswrapper[5014]: I0228 05:45:46.831769 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" event={"ID":"6aad0009-d904-48f8-8e30-82205907ece1","Type":"ContainerDied","Data":"ac2388f733415025bd6dcef6ebe0c612a2ded51ee4cdea38bfbf3a883792fe2e"} Feb 28 05:45:46 crc kubenswrapper[5014]: I0228 05:45:46.832459 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-cct62" event={"ID":"6aad0009-d904-48f8-8e30-82205907ece1","Type":"ContainerStarted","Data":"9665ed1b63c017c78a5cc1452812e22ba7e610749a22294752f7c346ef85af49"} Feb 28 05:45:46 crc kubenswrapper[5014]: I0228 05:45:46.832494 5014 scope.go:117] "RemoveContainer" containerID="40477bc19fb801309320ac64cfcdd068229d097d3d2605fd0be9518095f50e19" Feb 28 05:46:00 crc kubenswrapper[5014]: I0228 05:46:00.168615 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537626-r8pkq"] Feb 28 05:46:00 
crc kubenswrapper[5014]: E0228 05:46:00.169847 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a5c0749-f88e-44c2-a0da-409f15e642d1" containerName="extract-content" Feb 28 05:46:00 crc kubenswrapper[5014]: I0228 05:46:00.169868 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a5c0749-f88e-44c2-a0da-409f15e642d1" containerName="extract-content" Feb 28 05:46:00 crc kubenswrapper[5014]: E0228 05:46:00.169918 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a5c0749-f88e-44c2-a0da-409f15e642d1" containerName="registry-server" Feb 28 05:46:00 crc kubenswrapper[5014]: I0228 05:46:00.169932 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a5c0749-f88e-44c2-a0da-409f15e642d1" containerName="registry-server" Feb 28 05:46:00 crc kubenswrapper[5014]: E0228 05:46:00.169989 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a5c0749-f88e-44c2-a0da-409f15e642d1" containerName="extract-utilities" Feb 28 05:46:00 crc kubenswrapper[5014]: I0228 05:46:00.170004 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a5c0749-f88e-44c2-a0da-409f15e642d1" containerName="extract-utilities" Feb 28 05:46:00 crc kubenswrapper[5014]: I0228 05:46:00.170388 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a5c0749-f88e-44c2-a0da-409f15e642d1" containerName="registry-server" Feb 28 05:46:00 crc kubenswrapper[5014]: I0228 05:46:00.171783 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537626-r8pkq" Feb 28 05:46:00 crc kubenswrapper[5014]: I0228 05:46:00.174622 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 05:46:00 crc kubenswrapper[5014]: I0228 05:46:00.174880 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-r6pdk" Feb 28 05:46:00 crc kubenswrapper[5014]: I0228 05:46:00.175458 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 05:46:00 crc kubenswrapper[5014]: I0228 05:46:00.189451 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537626-r8pkq"] Feb 28 05:46:00 crc kubenswrapper[5014]: I0228 05:46:00.259369 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwqcs\" (UniqueName: \"kubernetes.io/projected/3b1c065c-0200-4063-aaff-c0773434fdd3-kube-api-access-rwqcs\") pod \"auto-csr-approver-29537626-r8pkq\" (UID: \"3b1c065c-0200-4063-aaff-c0773434fdd3\") " pod="openshift-infra/auto-csr-approver-29537626-r8pkq" Feb 28 05:46:00 crc kubenswrapper[5014]: I0228 05:46:00.362710 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwqcs\" (UniqueName: \"kubernetes.io/projected/3b1c065c-0200-4063-aaff-c0773434fdd3-kube-api-access-rwqcs\") pod \"auto-csr-approver-29537626-r8pkq\" (UID: \"3b1c065c-0200-4063-aaff-c0773434fdd3\") " pod="openshift-infra/auto-csr-approver-29537626-r8pkq" Feb 28 05:46:00 crc kubenswrapper[5014]: I0228 05:46:00.492937 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwqcs\" (UniqueName: \"kubernetes.io/projected/3b1c065c-0200-4063-aaff-c0773434fdd3-kube-api-access-rwqcs\") pod \"auto-csr-approver-29537626-r8pkq\" (UID: \"3b1c065c-0200-4063-aaff-c0773434fdd3\") " 
pod="openshift-infra/auto-csr-approver-29537626-r8pkq" Feb 28 05:46:00 crc kubenswrapper[5014]: I0228 05:46:00.520224 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537626-r8pkq" Feb 28 05:46:00 crc kubenswrapper[5014]: I0228 05:46:00.614788 5014 scope.go:117] "RemoveContainer" containerID="d17cc5e46448a11b318b7f0bcbef86f6b14109e4154153176a104be7edecabd3" Feb 28 05:46:00 crc kubenswrapper[5014]: I0228 05:46:00.672258 5014 scope.go:117] "RemoveContainer" containerID="1478f02cc29f04a1c6095a0fe53e641c12d0c24c81c98c69363e4be9b412517f" Feb 28 05:46:01 crc kubenswrapper[5014]: I0228 05:46:01.028853 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537626-r8pkq"] Feb 28 05:46:01 crc kubenswrapper[5014]: W0228 05:46:01.036008 5014 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3b1c065c_0200_4063_aaff_c0773434fdd3.slice/crio-d91420deaa5f96434a0e45a181b021d2d69c2ae6e696b90cfc38e8894925ba62 WatchSource:0}: Error finding container d91420deaa5f96434a0e45a181b021d2d69c2ae6e696b90cfc38e8894925ba62: Status 404 returned error can't find the container with id d91420deaa5f96434a0e45a181b021d2d69c2ae6e696b90cfc38e8894925ba62 Feb 28 05:46:02 crc kubenswrapper[5014]: I0228 05:46:02.010895 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537626-r8pkq" event={"ID":"3b1c065c-0200-4063-aaff-c0773434fdd3","Type":"ContainerStarted","Data":"d91420deaa5f96434a0e45a181b021d2d69c2ae6e696b90cfc38e8894925ba62"} Feb 28 05:46:03 crc kubenswrapper[5014]: I0228 05:46:03.025426 5014 generic.go:334] "Generic (PLEG): container finished" podID="3b1c065c-0200-4063-aaff-c0773434fdd3" containerID="2dfc4fbe32f3d6a6f77302f8c0f9100653751b608508c18b8c82458a919eb66b" exitCode=0 Feb 28 05:46:03 crc kubenswrapper[5014]: I0228 05:46:03.025700 5014 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537626-r8pkq" event={"ID":"3b1c065c-0200-4063-aaff-c0773434fdd3","Type":"ContainerDied","Data":"2dfc4fbe32f3d6a6f77302f8c0f9100653751b608508c18b8c82458a919eb66b"} Feb 28 05:46:04 crc kubenswrapper[5014]: I0228 05:46:04.396667 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537626-r8pkq" Feb 28 05:46:04 crc kubenswrapper[5014]: I0228 05:46:04.466324 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwqcs\" (UniqueName: \"kubernetes.io/projected/3b1c065c-0200-4063-aaff-c0773434fdd3-kube-api-access-rwqcs\") pod \"3b1c065c-0200-4063-aaff-c0773434fdd3\" (UID: \"3b1c065c-0200-4063-aaff-c0773434fdd3\") " Feb 28 05:46:04 crc kubenswrapper[5014]: I0228 05:46:04.474482 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b1c065c-0200-4063-aaff-c0773434fdd3-kube-api-access-rwqcs" (OuterVolumeSpecName: "kube-api-access-rwqcs") pod "3b1c065c-0200-4063-aaff-c0773434fdd3" (UID: "3b1c065c-0200-4063-aaff-c0773434fdd3"). InnerVolumeSpecName "kube-api-access-rwqcs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:46:04 crc kubenswrapper[5014]: I0228 05:46:04.570002 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwqcs\" (UniqueName: \"kubernetes.io/projected/3b1c065c-0200-4063-aaff-c0773434fdd3-kube-api-access-rwqcs\") on node \"crc\" DevicePath \"\"" Feb 28 05:46:05 crc kubenswrapper[5014]: I0228 05:46:05.047959 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537626-r8pkq" event={"ID":"3b1c065c-0200-4063-aaff-c0773434fdd3","Type":"ContainerDied","Data":"d91420deaa5f96434a0e45a181b021d2d69c2ae6e696b90cfc38e8894925ba62"} Feb 28 05:46:05 crc kubenswrapper[5014]: I0228 05:46:05.048015 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d91420deaa5f96434a0e45a181b021d2d69c2ae6e696b90cfc38e8894925ba62" Feb 28 05:46:05 crc kubenswrapper[5014]: I0228 05:46:05.048135 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537626-r8pkq" Feb 28 05:46:05 crc kubenswrapper[5014]: I0228 05:46:05.484354 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537620-rp6br"] Feb 28 05:46:05 crc kubenswrapper[5014]: I0228 05:46:05.492146 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537620-rp6br"] Feb 28 05:46:06 crc kubenswrapper[5014]: I0228 05:46:06.194964 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="350ac0d4-0be2-4765-85f4-2305c7ae8971" path="/var/lib/kubelet/pods/350ac0d4-0be2-4765-85f4-2305c7ae8971/volumes" Feb 28 05:46:46 crc kubenswrapper[5014]: I0228 05:46:46.785000 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-9r5tp"] Feb 28 05:46:46 crc kubenswrapper[5014]: E0228 05:46:46.786266 5014 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3b1c065c-0200-4063-aaff-c0773434fdd3" containerName="oc" Feb 28 05:46:46 crc kubenswrapper[5014]: I0228 05:46:46.786290 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b1c065c-0200-4063-aaff-c0773434fdd3" containerName="oc" Feb 28 05:46:46 crc kubenswrapper[5014]: I0228 05:46:46.786703 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b1c065c-0200-4063-aaff-c0773434fdd3" containerName="oc" Feb 28 05:46:46 crc kubenswrapper[5014]: I0228 05:46:46.790163 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9r5tp" Feb 28 05:46:46 crc kubenswrapper[5014]: I0228 05:46:46.807438 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9r5tp"] Feb 28 05:46:46 crc kubenswrapper[5014]: I0228 05:46:46.851964 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d642929c-1989-4735-bb09-c4a9f1b7c420-catalog-content\") pod \"certified-operators-9r5tp\" (UID: \"d642929c-1989-4735-bb09-c4a9f1b7c420\") " pod="openshift-marketplace/certified-operators-9r5tp" Feb 28 05:46:46 crc kubenswrapper[5014]: I0228 05:46:46.852311 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d642929c-1989-4735-bb09-c4a9f1b7c420-utilities\") pod \"certified-operators-9r5tp\" (UID: \"d642929c-1989-4735-bb09-c4a9f1b7c420\") " pod="openshift-marketplace/certified-operators-9r5tp" Feb 28 05:46:46 crc kubenswrapper[5014]: I0228 05:46:46.852346 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cr2h\" (UniqueName: \"kubernetes.io/projected/d642929c-1989-4735-bb09-c4a9f1b7c420-kube-api-access-4cr2h\") pod \"certified-operators-9r5tp\" (UID: \"d642929c-1989-4735-bb09-c4a9f1b7c420\") " 
pod="openshift-marketplace/certified-operators-9r5tp" Feb 28 05:46:46 crc kubenswrapper[5014]: I0228 05:46:46.953665 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d642929c-1989-4735-bb09-c4a9f1b7c420-catalog-content\") pod \"certified-operators-9r5tp\" (UID: \"d642929c-1989-4735-bb09-c4a9f1b7c420\") " pod="openshift-marketplace/certified-operators-9r5tp" Feb 28 05:46:46 crc kubenswrapper[5014]: I0228 05:46:46.953724 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d642929c-1989-4735-bb09-c4a9f1b7c420-utilities\") pod \"certified-operators-9r5tp\" (UID: \"d642929c-1989-4735-bb09-c4a9f1b7c420\") " pod="openshift-marketplace/certified-operators-9r5tp" Feb 28 05:46:46 crc kubenswrapper[5014]: I0228 05:46:46.953751 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cr2h\" (UniqueName: \"kubernetes.io/projected/d642929c-1989-4735-bb09-c4a9f1b7c420-kube-api-access-4cr2h\") pod \"certified-operators-9r5tp\" (UID: \"d642929c-1989-4735-bb09-c4a9f1b7c420\") " pod="openshift-marketplace/certified-operators-9r5tp" Feb 28 05:46:46 crc kubenswrapper[5014]: I0228 05:46:46.954306 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d642929c-1989-4735-bb09-c4a9f1b7c420-catalog-content\") pod \"certified-operators-9r5tp\" (UID: \"d642929c-1989-4735-bb09-c4a9f1b7c420\") " pod="openshift-marketplace/certified-operators-9r5tp" Feb 28 05:46:46 crc kubenswrapper[5014]: I0228 05:46:46.954518 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d642929c-1989-4735-bb09-c4a9f1b7c420-utilities\") pod \"certified-operators-9r5tp\" (UID: \"d642929c-1989-4735-bb09-c4a9f1b7c420\") " 
pod="openshift-marketplace/certified-operators-9r5tp" Feb 28 05:46:46 crc kubenswrapper[5014]: I0228 05:46:46.987438 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cr2h\" (UniqueName: \"kubernetes.io/projected/d642929c-1989-4735-bb09-c4a9f1b7c420-kube-api-access-4cr2h\") pod \"certified-operators-9r5tp\" (UID: \"d642929c-1989-4735-bb09-c4a9f1b7c420\") " pod="openshift-marketplace/certified-operators-9r5tp" Feb 28 05:46:47 crc kubenswrapper[5014]: I0228 05:46:47.134067 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-9r5tp" Feb 28 05:46:47 crc kubenswrapper[5014]: I0228 05:46:47.585146 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-9r5tp"] Feb 28 05:46:48 crc kubenswrapper[5014]: I0228 05:46:48.589492 5014 generic.go:334] "Generic (PLEG): container finished" podID="d642929c-1989-4735-bb09-c4a9f1b7c420" containerID="14d88956a6fefbace709730d0a0124b751879467a629c4ff8099f070eb7b101c" exitCode=0 Feb 28 05:46:48 crc kubenswrapper[5014]: I0228 05:46:48.589956 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9r5tp" event={"ID":"d642929c-1989-4735-bb09-c4a9f1b7c420","Type":"ContainerDied","Data":"14d88956a6fefbace709730d0a0124b751879467a629c4ff8099f070eb7b101c"} Feb 28 05:46:48 crc kubenswrapper[5014]: I0228 05:46:48.590012 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9r5tp" event={"ID":"d642929c-1989-4735-bb09-c4a9f1b7c420","Type":"ContainerStarted","Data":"a51014d67f2820e92045914903fbdbe7e35bdf83b2486b23e8cb26b5fe56d069"} Feb 28 05:46:49 crc kubenswrapper[5014]: I0228 05:46:49.603603 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9r5tp" 
event={"ID":"d642929c-1989-4735-bb09-c4a9f1b7c420","Type":"ContainerStarted","Data":"ce4aead5815c736c18a06972e688ac1c20f45150242cf5643098692a060d9a2a"} Feb 28 05:46:50 crc kubenswrapper[5014]: I0228 05:46:50.620583 5014 generic.go:334] "Generic (PLEG): container finished" podID="d642929c-1989-4735-bb09-c4a9f1b7c420" containerID="ce4aead5815c736c18a06972e688ac1c20f45150242cf5643098692a060d9a2a" exitCode=0 Feb 28 05:46:50 crc kubenswrapper[5014]: I0228 05:46:50.620676 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9r5tp" event={"ID":"d642929c-1989-4735-bb09-c4a9f1b7c420","Type":"ContainerDied","Data":"ce4aead5815c736c18a06972e688ac1c20f45150242cf5643098692a060d9a2a"} Feb 28 05:46:51 crc kubenswrapper[5014]: I0228 05:46:51.634716 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9r5tp" event={"ID":"d642929c-1989-4735-bb09-c4a9f1b7c420","Type":"ContainerStarted","Data":"2d26396bc700faed45ca90df791aedbda18d9deb046e7befa72906d7308872ef"} Feb 28 05:46:51 crc kubenswrapper[5014]: I0228 05:46:51.672476 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-9r5tp" podStartSLOduration=3.267881193 podStartE2EDuration="5.67245895s" podCreationTimestamp="2026-02-28 05:46:46 +0000 UTC" firstStartedPulling="2026-02-28 05:46:48.592981405 +0000 UTC m=+4397.263107345" lastFinishedPulling="2026-02-28 05:46:50.997559162 +0000 UTC m=+4399.667685102" observedRunningTime="2026-02-28 05:46:51.665196596 +0000 UTC m=+4400.335322526" watchObservedRunningTime="2026-02-28 05:46:51.67245895 +0000 UTC m=+4400.342584860" Feb 28 05:46:57 crc kubenswrapper[5014]: I0228 05:46:57.135354 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-9r5tp" Feb 28 05:46:57 crc kubenswrapper[5014]: I0228 05:46:57.136161 5014 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-9r5tp" Feb 28 05:46:57 crc kubenswrapper[5014]: I0228 05:46:57.211319 5014 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-9r5tp" Feb 28 05:46:57 crc kubenswrapper[5014]: I0228 05:46:57.798144 5014 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-9r5tp" Feb 28 05:46:59 crc kubenswrapper[5014]: I0228 05:46:59.371332 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9r5tp"] Feb 28 05:46:59 crc kubenswrapper[5014]: I0228 05:46:59.726890 5014 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-9r5tp" podUID="d642929c-1989-4735-bb09-c4a9f1b7c420" containerName="registry-server" containerID="cri-o://2d26396bc700faed45ca90df791aedbda18d9deb046e7befa72906d7308872ef" gracePeriod=2 Feb 28 05:47:00 crc kubenswrapper[5014]: I0228 05:47:00.357699 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9r5tp" Feb 28 05:47:00 crc kubenswrapper[5014]: I0228 05:47:00.400584 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d642929c-1989-4735-bb09-c4a9f1b7c420-catalog-content\") pod \"d642929c-1989-4735-bb09-c4a9f1b7c420\" (UID: \"d642929c-1989-4735-bb09-c4a9f1b7c420\") " Feb 28 05:47:00 crc kubenswrapper[5014]: I0228 05:47:00.402896 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d642929c-1989-4735-bb09-c4a9f1b7c420-utilities\") pod \"d642929c-1989-4735-bb09-c4a9f1b7c420\" (UID: \"d642929c-1989-4735-bb09-c4a9f1b7c420\") " Feb 28 05:47:00 crc kubenswrapper[5014]: I0228 05:47:00.403517 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cr2h\" (UniqueName: \"kubernetes.io/projected/d642929c-1989-4735-bb09-c4a9f1b7c420-kube-api-access-4cr2h\") pod \"d642929c-1989-4735-bb09-c4a9f1b7c420\" (UID: \"d642929c-1989-4735-bb09-c4a9f1b7c420\") " Feb 28 05:47:00 crc kubenswrapper[5014]: I0228 05:47:00.404154 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d642929c-1989-4735-bb09-c4a9f1b7c420-utilities" (OuterVolumeSpecName: "utilities") pod "d642929c-1989-4735-bb09-c4a9f1b7c420" (UID: "d642929c-1989-4735-bb09-c4a9f1b7c420"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:47:00 crc kubenswrapper[5014]: I0228 05:47:00.407272 5014 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d642929c-1989-4735-bb09-c4a9f1b7c420-utilities\") on node \"crc\" DevicePath \"\"" Feb 28 05:47:00 crc kubenswrapper[5014]: I0228 05:47:00.409668 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d642929c-1989-4735-bb09-c4a9f1b7c420-kube-api-access-4cr2h" (OuterVolumeSpecName: "kube-api-access-4cr2h") pod "d642929c-1989-4735-bb09-c4a9f1b7c420" (UID: "d642929c-1989-4735-bb09-c4a9f1b7c420"). InnerVolumeSpecName "kube-api-access-4cr2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:47:00 crc kubenswrapper[5014]: I0228 05:47:00.453836 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d642929c-1989-4735-bb09-c4a9f1b7c420-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d642929c-1989-4735-bb09-c4a9f1b7c420" (UID: "d642929c-1989-4735-bb09-c4a9f1b7c420"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 28 05:47:00 crc kubenswrapper[5014]: I0228 05:47:00.509074 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cr2h\" (UniqueName: \"kubernetes.io/projected/d642929c-1989-4735-bb09-c4a9f1b7c420-kube-api-access-4cr2h\") on node \"crc\" DevicePath \"\"" Feb 28 05:47:00 crc kubenswrapper[5014]: I0228 05:47:00.509106 5014 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d642929c-1989-4735-bb09-c4a9f1b7c420-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 28 05:47:00 crc kubenswrapper[5014]: I0228 05:47:00.736907 5014 generic.go:334] "Generic (PLEG): container finished" podID="d642929c-1989-4735-bb09-c4a9f1b7c420" containerID="2d26396bc700faed45ca90df791aedbda18d9deb046e7befa72906d7308872ef" exitCode=0 Feb 28 05:47:00 crc kubenswrapper[5014]: I0228 05:47:00.736974 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9r5tp" event={"ID":"d642929c-1989-4735-bb09-c4a9f1b7c420","Type":"ContainerDied","Data":"2d26396bc700faed45ca90df791aedbda18d9deb046e7befa72906d7308872ef"} Feb 28 05:47:00 crc kubenswrapper[5014]: I0228 05:47:00.736983 5014 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-9r5tp" Feb 28 05:47:00 crc kubenswrapper[5014]: I0228 05:47:00.737018 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-9r5tp" event={"ID":"d642929c-1989-4735-bb09-c4a9f1b7c420","Type":"ContainerDied","Data":"a51014d67f2820e92045914903fbdbe7e35bdf83b2486b23e8cb26b5fe56d069"} Feb 28 05:47:00 crc kubenswrapper[5014]: I0228 05:47:00.737054 5014 scope.go:117] "RemoveContainer" containerID="2d26396bc700faed45ca90df791aedbda18d9deb046e7befa72906d7308872ef" Feb 28 05:47:00 crc kubenswrapper[5014]: I0228 05:47:00.765096 5014 scope.go:117] "RemoveContainer" containerID="ce4aead5815c736c18a06972e688ac1c20f45150242cf5643098692a060d9a2a" Feb 28 05:47:00 crc kubenswrapper[5014]: I0228 05:47:00.784309 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-9r5tp"] Feb 28 05:47:00 crc kubenswrapper[5014]: I0228 05:47:00.787651 5014 scope.go:117] "RemoveContainer" containerID="4bc253b5d178348d9346b2163eb96300a8270858ebb5c4f1035f66810f361084" Feb 28 05:47:00 crc kubenswrapper[5014]: I0228 05:47:00.791550 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-9r5tp"] Feb 28 05:47:00 crc kubenswrapper[5014]: I0228 05:47:00.811012 5014 scope.go:117] "RemoveContainer" containerID="14d88956a6fefbace709730d0a0124b751879467a629c4ff8099f070eb7b101c" Feb 28 05:47:00 crc kubenswrapper[5014]: I0228 05:47:00.948627 5014 scope.go:117] "RemoveContainer" containerID="2d26396bc700faed45ca90df791aedbda18d9deb046e7befa72906d7308872ef" Feb 28 05:47:00 crc kubenswrapper[5014]: E0228 05:47:00.949628 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d26396bc700faed45ca90df791aedbda18d9deb046e7befa72906d7308872ef\": container with ID starting with 2d26396bc700faed45ca90df791aedbda18d9deb046e7befa72906d7308872ef 
not found: ID does not exist" containerID="2d26396bc700faed45ca90df791aedbda18d9deb046e7befa72906d7308872ef" Feb 28 05:47:00 crc kubenswrapper[5014]: I0228 05:47:00.949694 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d26396bc700faed45ca90df791aedbda18d9deb046e7befa72906d7308872ef"} err="failed to get container status \"2d26396bc700faed45ca90df791aedbda18d9deb046e7befa72906d7308872ef\": rpc error: code = NotFound desc = could not find container \"2d26396bc700faed45ca90df791aedbda18d9deb046e7befa72906d7308872ef\": container with ID starting with 2d26396bc700faed45ca90df791aedbda18d9deb046e7befa72906d7308872ef not found: ID does not exist" Feb 28 05:47:00 crc kubenswrapper[5014]: I0228 05:47:00.949734 5014 scope.go:117] "RemoveContainer" containerID="ce4aead5815c736c18a06972e688ac1c20f45150242cf5643098692a060d9a2a" Feb 28 05:47:00 crc kubenswrapper[5014]: E0228 05:47:00.950274 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce4aead5815c736c18a06972e688ac1c20f45150242cf5643098692a060d9a2a\": container with ID starting with ce4aead5815c736c18a06972e688ac1c20f45150242cf5643098692a060d9a2a not found: ID does not exist" containerID="ce4aead5815c736c18a06972e688ac1c20f45150242cf5643098692a060d9a2a" Feb 28 05:47:00 crc kubenswrapper[5014]: I0228 05:47:00.950315 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce4aead5815c736c18a06972e688ac1c20f45150242cf5643098692a060d9a2a"} err="failed to get container status \"ce4aead5815c736c18a06972e688ac1c20f45150242cf5643098692a060d9a2a\": rpc error: code = NotFound desc = could not find container \"ce4aead5815c736c18a06972e688ac1c20f45150242cf5643098692a060d9a2a\": container with ID starting with ce4aead5815c736c18a06972e688ac1c20f45150242cf5643098692a060d9a2a not found: ID does not exist" Feb 28 05:47:00 crc kubenswrapper[5014]: I0228 
05:47:00.950343 5014 scope.go:117] "RemoveContainer" containerID="14d88956a6fefbace709730d0a0124b751879467a629c4ff8099f070eb7b101c" Feb 28 05:47:00 crc kubenswrapper[5014]: E0228 05:47:00.950728 5014 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14d88956a6fefbace709730d0a0124b751879467a629c4ff8099f070eb7b101c\": container with ID starting with 14d88956a6fefbace709730d0a0124b751879467a629c4ff8099f070eb7b101c not found: ID does not exist" containerID="14d88956a6fefbace709730d0a0124b751879467a629c4ff8099f070eb7b101c" Feb 28 05:47:00 crc kubenswrapper[5014]: I0228 05:47:00.950767 5014 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14d88956a6fefbace709730d0a0124b751879467a629c4ff8099f070eb7b101c"} err="failed to get container status \"14d88956a6fefbace709730d0a0124b751879467a629c4ff8099f070eb7b101c\": rpc error: code = NotFound desc = could not find container \"14d88956a6fefbace709730d0a0124b751879467a629c4ff8099f070eb7b101c\": container with ID starting with 14d88956a6fefbace709730d0a0124b751879467a629c4ff8099f070eb7b101c not found: ID does not exist" Feb 28 05:47:02 crc kubenswrapper[5014]: I0228 05:47:02.189407 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d642929c-1989-4735-bb09-c4a9f1b7c420" path="/var/lib/kubelet/pods/d642929c-1989-4735-bb09-c4a9f1b7c420/volumes" Feb 28 05:48:00 crc kubenswrapper[5014]: I0228 05:48:00.166574 5014 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537628-7plvc"] Feb 28 05:48:00 crc kubenswrapper[5014]: E0228 05:48:00.168226 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d642929c-1989-4735-bb09-c4a9f1b7c420" containerName="extract-utilities" Feb 28 05:48:00 crc kubenswrapper[5014]: I0228 05:48:00.168257 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="d642929c-1989-4735-bb09-c4a9f1b7c420" 
containerName="extract-utilities" Feb 28 05:48:00 crc kubenswrapper[5014]: E0228 05:48:00.168295 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d642929c-1989-4735-bb09-c4a9f1b7c420" containerName="registry-server" Feb 28 05:48:00 crc kubenswrapper[5014]: I0228 05:48:00.168312 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="d642929c-1989-4735-bb09-c4a9f1b7c420" containerName="registry-server" Feb 28 05:48:00 crc kubenswrapper[5014]: E0228 05:48:00.168343 5014 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d642929c-1989-4735-bb09-c4a9f1b7c420" containerName="extract-content" Feb 28 05:48:00 crc kubenswrapper[5014]: I0228 05:48:00.168360 5014 state_mem.go:107] "Deleted CPUSet assignment" podUID="d642929c-1989-4735-bb09-c4a9f1b7c420" containerName="extract-content" Feb 28 05:48:00 crc kubenswrapper[5014]: I0228 05:48:00.168996 5014 memory_manager.go:354] "RemoveStaleState removing state" podUID="d642929c-1989-4735-bb09-c4a9f1b7c420" containerName="registry-server" Feb 28 05:48:00 crc kubenswrapper[5014]: I0228 05:48:00.170466 5014 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537628-7plvc" Feb 28 05:48:00 crc kubenswrapper[5014]: I0228 05:48:00.175180 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 28 05:48:00 crc kubenswrapper[5014]: I0228 05:48:00.177111 5014 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 28 05:48:00 crc kubenswrapper[5014]: I0228 05:48:00.177443 5014 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-r6pdk" Feb 28 05:48:00 crc kubenswrapper[5014]: I0228 05:48:00.184410 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537628-7plvc"] Feb 28 05:48:00 crc kubenswrapper[5014]: I0228 05:48:00.309914 5014 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv8q7\" (UniqueName: \"kubernetes.io/projected/727d70b8-d442-4198-9e0b-4cb82fa0e345-kube-api-access-tv8q7\") pod \"auto-csr-approver-29537628-7plvc\" (UID: \"727d70b8-d442-4198-9e0b-4cb82fa0e345\") " pod="openshift-infra/auto-csr-approver-29537628-7plvc" Feb 28 05:48:00 crc kubenswrapper[5014]: I0228 05:48:00.412299 5014 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tv8q7\" (UniqueName: \"kubernetes.io/projected/727d70b8-d442-4198-9e0b-4cb82fa0e345-kube-api-access-tv8q7\") pod \"auto-csr-approver-29537628-7plvc\" (UID: \"727d70b8-d442-4198-9e0b-4cb82fa0e345\") " pod="openshift-infra/auto-csr-approver-29537628-7plvc" Feb 28 05:48:00 crc kubenswrapper[5014]: I0228 05:48:00.701788 5014 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tv8q7\" (UniqueName: \"kubernetes.io/projected/727d70b8-d442-4198-9e0b-4cb82fa0e345-kube-api-access-tv8q7\") pod \"auto-csr-approver-29537628-7plvc\" (UID: \"727d70b8-d442-4198-9e0b-4cb82fa0e345\") " 
pod="openshift-infra/auto-csr-approver-29537628-7plvc" Feb 28 05:48:00 crc kubenswrapper[5014]: I0228 05:48:00.828642 5014 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537628-7plvc" Feb 28 05:48:01 crc kubenswrapper[5014]: I0228 05:48:01.380056 5014 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537628-7plvc"] Feb 28 05:48:01 crc kubenswrapper[5014]: I0228 05:48:01.454917 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537628-7plvc" event={"ID":"727d70b8-d442-4198-9e0b-4cb82fa0e345","Type":"ContainerStarted","Data":"9b28cdf24334cdd26ded2cecccf2b6c940761801eeded7d715b2a7f6106625ff"} Feb 28 05:48:03 crc kubenswrapper[5014]: I0228 05:48:03.480586 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537628-7plvc" event={"ID":"727d70b8-d442-4198-9e0b-4cb82fa0e345","Type":"ContainerStarted","Data":"f37bc78cc4590b43e51c63d6031561b299ee1c57a83bb28cf6447a5b5a65f058"} Feb 28 05:48:03 crc kubenswrapper[5014]: I0228 05:48:03.508456 5014 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537628-7plvc" podStartSLOduration=2.5801728969999997 podStartE2EDuration="3.508435507s" podCreationTimestamp="2026-02-28 05:48:00 +0000 UTC" firstStartedPulling="2026-02-28 05:48:01.393440343 +0000 UTC m=+4470.063566253" lastFinishedPulling="2026-02-28 05:48:02.321702943 +0000 UTC m=+4470.991828863" observedRunningTime="2026-02-28 05:48:03.497486444 +0000 UTC m=+4472.167612384" watchObservedRunningTime="2026-02-28 05:48:03.508435507 +0000 UTC m=+4472.178561427" Feb 28 05:48:04 crc kubenswrapper[5014]: I0228 05:48:04.493980 5014 generic.go:334] "Generic (PLEG): container finished" podID="727d70b8-d442-4198-9e0b-4cb82fa0e345" containerID="f37bc78cc4590b43e51c63d6031561b299ee1c57a83bb28cf6447a5b5a65f058" exitCode=0 Feb 28 05:48:04 crc 
kubenswrapper[5014]: I0228 05:48:04.494022 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537628-7plvc" event={"ID":"727d70b8-d442-4198-9e0b-4cb82fa0e345","Type":"ContainerDied","Data":"f37bc78cc4590b43e51c63d6031561b299ee1c57a83bb28cf6447a5b5a65f058"} Feb 28 05:48:05 crc kubenswrapper[5014]: I0228 05:48:05.918290 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537628-7plvc" Feb 28 05:48:06 crc kubenswrapper[5014]: I0228 05:48:06.081481 5014 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tv8q7\" (UniqueName: \"kubernetes.io/projected/727d70b8-d442-4198-9e0b-4cb82fa0e345-kube-api-access-tv8q7\") pod \"727d70b8-d442-4198-9e0b-4cb82fa0e345\" (UID: \"727d70b8-d442-4198-9e0b-4cb82fa0e345\") " Feb 28 05:48:06 crc kubenswrapper[5014]: I0228 05:48:06.087835 5014 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/727d70b8-d442-4198-9e0b-4cb82fa0e345-kube-api-access-tv8q7" (OuterVolumeSpecName: "kube-api-access-tv8q7") pod "727d70b8-d442-4198-9e0b-4cb82fa0e345" (UID: "727d70b8-d442-4198-9e0b-4cb82fa0e345"). InnerVolumeSpecName "kube-api-access-tv8q7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 28 05:48:06 crc kubenswrapper[5014]: I0228 05:48:06.183802 5014 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tv8q7\" (UniqueName: \"kubernetes.io/projected/727d70b8-d442-4198-9e0b-4cb82fa0e345-kube-api-access-tv8q7\") on node \"crc\" DevicePath \"\"" Feb 28 05:48:06 crc kubenswrapper[5014]: I0228 05:48:06.561920 5014 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537628-7plvc" event={"ID":"727d70b8-d442-4198-9e0b-4cb82fa0e345","Type":"ContainerDied","Data":"9b28cdf24334cdd26ded2cecccf2b6c940761801eeded7d715b2a7f6106625ff"} Feb 28 05:48:06 crc kubenswrapper[5014]: I0228 05:48:06.561967 5014 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b28cdf24334cdd26ded2cecccf2b6c940761801eeded7d715b2a7f6106625ff" Feb 28 05:48:06 crc kubenswrapper[5014]: I0228 05:48:06.561977 5014 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537628-7plvc" Feb 28 05:48:06 crc kubenswrapper[5014]: I0228 05:48:06.581404 5014 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537622-c6sv6"] Feb 28 05:48:06 crc kubenswrapper[5014]: I0228 05:48:06.592204 5014 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537622-c6sv6"] Feb 28 05:48:08 crc kubenswrapper[5014]: I0228 05:48:08.192475 5014 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17e0a40c-e90d-42e3-845e-bc6f8d32c111" path="/var/lib/kubelet/pods/17e0a40c-e90d-42e3-845e-bc6f8d32c111/volumes" Feb 28 05:48:15 crc kubenswrapper[5014]: I0228 05:48:15.706860 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Feb 28 05:48:15 crc kubenswrapper[5014]: I0228 05:48:15.707513 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 28 05:48:45 crc kubenswrapper[5014]: I0228 05:48:45.706085 5014 patch_prober.go:28] interesting pod/machine-config-daemon-cct62 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 28 05:48:45 crc kubenswrapper[5014]: I0228 05:48:45.706600 5014 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-cct62" podUID="6aad0009-d904-48f8-8e30-82205907ece1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"